Hazelcast

In-memory data grid for Java. Integrates distributed caching and real-time processing. Scales to hundreds of nodes and supports up to 1 million topics.

Overview

Hazelcast is an in-memory data grid for Java, developed as an enterprise platform that integrates distributed caching with real-time processing. More than a simple cache server, it provides distributed data management, stream processing, and distributed computing in a single product. It holds a significant position in distributed application development on the JVM, scaling to hundreds of nodes and supporting up to 1 million topics.

Details

As of 2025, Hazelcast holds an established position as a leading solution for distributed data management in enterprise Java environments. With nearly 20 years of development behind it, it offers mature APIs and proven stability, and demand has grown alongside the spread of microservices architectures. By integrating distributed caching, real-time stream processing, and distributed computing on a single platform, it greatly simplifies configurations that traditionally required multiple tools. It supports a wide range of Java versions, from Java 8 through the latest LTS release, and integrates well with Spring Boot, Kubernetes, and Docker environments.

Key Features

  • In-Memory Data Grid: High-speed data access and processing in distributed environments
  • Distributed Computing: Efficient computational processing leveraging data locality
  • Stream Processing (Jet): Real-time data pipelines and event processing (a minimal pipeline sketch follows this list)
  • SQL Support: SQL queries and JOIN operations on distributed data
  • Management Center: Comprehensive monitoring, management, and visualization tools
  • Linear Scalability: Horizontal scaling to hundreds of nodes
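
The Jet engine referenced above is embedded in Hazelcast 5.x but disabled by default. The following is a minimal sketch of the Pipeline API, assuming the built-in test source (TestSources.itemStream); the pipeline and filter logic are illustrative only:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

// Jet must be enabled explicitly in Hazelcast 5.x
Config config = new Config();
config.getJetConfig().setEnabled(true);
HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance(config);

// Build a streaming pipeline: emit 10 test events per second, filter, log
Pipeline pipeline = Pipeline.create();
pipeline.readFrom(TestSources.itemStream(10))
        .withIngestionTimestamps()                   // timestamp events on arrival
        .filter(event -> event.sequence() % 2 == 0)  // keep even-numbered events
        .writeTo(Sinks.logger());                    // print results to the member log

// Submit the job; Jet distributes execution across the cluster
hazelcast.getJet().newJob(pipeline);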

Pros and Cons

Pros

  • Extensive track record and rich integration libraries in the Java ecosystem
  • Unified platform for distributed caching, computation, and stream processing
  • Comprehensive monitoring and management features for enterprise use
  • Excellent integration with Spring Boot, microservices, and Kubernetes environments
  • High-performance distributed processing through data locality
  • High reliability with ACID transactions and SQL support

Cons

  • Dependency on the Java environment limits integration with other language ecosystems
  • High learning and operational costs due to the breadth of the feature set
  • GC (garbage collection) tuning is needed when operating large-scale clusters
  • Excessive features and resource consumption for simple caching use cases
  • Complex licensing, with paid editions required for enterprise features
  • High memory usage; as an in-memory store it is costly for very large datasets

Code Examples

Basic Setup and Hazelcast Instance Creation

// Maven dependency
<!--
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>5.3.6</version>
</dependency>
-->

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;
import java.util.List;
import java.util.Map;

// Create Hazelcast instance with default configuration
HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();

// Create instance with custom configuration
Config config = new Config();
config.setClusterName("my-cluster");
config.getNetworkConfig().setPort(5701);
config.getNetworkConfig().setPortAutoIncrement(true);

HazelcastInstance customHazelcast = Hazelcast.newHazelcastInstance(config);

// Get distributed map and basic operations
Map<String, String> distributedMap = hazelcast.getMap("my-distributed-map");
distributedMap.put("key1", "value1");
String value = distributedMap.get("key1");
System.out.println("Retrieved value: " + value);

// Create and operate distributed list
List<String> distributedList = hazelcast.getList("my-list");
distributedList.add("item1");
distributedList.add("item2");
System.out.println("List size: " + distributedList.size());

// Distributed lock (Hazelcast 5.x provides locks through the CP Subsystem)
FencedLock distributedLock = hazelcast.getCPSubsystem().getLock("my-lock");
distributedLock.lock();
try {
    // Critical section
    System.out.println("Lock acquired, performing work...");
} finally {
    distributedLock.unlock();
}

// Proper shutdown
hazelcast.shutdown();
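
Running the same program in a second JVM forms a two-member cluster out of the box: the default configuration discovers members via multicast, and with port auto-increment enabled the second instance binds to 5702.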

Distributed Cache and Near Cache Configuration

import com.hazelcast.config.*;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.LocalMapStats;
import com.hazelcast.query.Predicates;
import java.io.Serializable;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Hazelcast configuration with cache settings
Config config = new Config();

// Distributed map configuration
MapConfig mapConfig = new MapConfig("user-cache");
mapConfig.setTimeToLiveSeconds(300); // 5 minutes TTL
mapConfig.setMaxIdleSeconds(120);    // 2 minutes idle timeout
mapConfig.setBackupCount(1);         // Number of backups
mapConfig.setAsyncBackupCount(1);    // Number of async backups

// Near Cache configuration (local cache)
NearCacheConfig nearCacheConfig = new NearCacheConfig();
nearCacheConfig.setInMemoryFormat(InMemoryFormat.OBJECT);
nearCacheConfig.setInvalidateOnChange(true);
nearCacheConfig.setTimeToLiveSeconds(60);
nearCacheConfig.setCacheLocalEntries(false);

// Eviction configuration
EvictionConfig evictionConfig = new EvictionConfig();
evictionConfig.setEvictionPolicy(EvictionPolicy.LFU);
evictionConfig.setMaxSizePolicy(MaxSizePolicy.ENTRY_COUNT);
evictionConfig.setSize(1000);
nearCacheConfig.setEvictionConfig(evictionConfig);

mapConfig.setNearCacheConfig(nearCacheConfig);
config.addMapConfig(mapConfig);

HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance(config);

// Distributed cache operations
IMap<String, User> userCache = hazelcast.getMap("user-cache");

// Cache user data
User user = new User("12345", "John Doe", "[email protected]");
userCache.put(user.getId(), user);

// Conditional PUT (only if key doesn't exist)
User existingUser = userCache.putIfAbsent("67890", 
    new User("67890", "Jane Smith", "[email protected]"));

// PUT with TTL specification (individual TTL setting)
userCache.put("temp-user", user, 30, TimeUnit.SECONDS);

// Batch operations
Map<String, User> batchUsers = new HashMap<>();
batchUsers.put("user1", new User("user1", "Alice Johnson", "[email protected]"));
batchUsers.put("user2", new User("user2", "Bob Wilson", "[email protected]"));
userCache.putAll(batchUsers);

// Search and filtering
Collection<User> activeUsers = userCache.values(
    Predicates.sql("active = true")
);

// Get statistics (LocalMapStats exposes raw counters; derive the hit ratio manually)
LocalMapStats stats = userCache.getLocalMapStats();
long gets = stats.getGetOperationCount();
double hitRatio = gets > 0 ? (double) stats.getHits() / gets : 0.0;
System.out.println("Hit ratio: " + hitRatio);
System.out.println("Entry count: " + stats.getOwnedEntryCount());

// User class (simple example)
class User implements Serializable {
    private String id;
    private String name;
    private String email;
    private boolean active = true;
    
    public User(String id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }
    
    // getters
    public String getId() { return id; }
    public String getName() { return name; }
    public String getEmail() { return email; }
    public boolean isActive() { return active; }
}

Distributed Computing and ExecutorService

import com.hazelcast.cluster.Member;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.map.IMap;
import java.io.Serializable;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

// Get distributed execution service
IExecutorService executor = hazelcast.getExecutorService("my-executor");

// Define Callable task
class DataProcessingTask implements Callable<String>, Serializable {
    private String data;
    
    public DataProcessingTask(String data) {
        this.data = data;
    }
    
    @Override
    public String call() throws Exception {
        // Simulate heavy processing
        Thread.sleep(1000);
        return "Processed: " + data + " on " + 
               Thread.currentThread().getName();
    }
}

// Execute task on specific member
Set<Member> members = hazelcast.getCluster().getMembers();
Member targetMember = members.iterator().next();
Future<String> result = executor.submitToMember(
    new DataProcessingTask("important-data"), 
    targetMember
);

System.out.println("Result: " + result.get());

// Execute task on key owner (data locality)
IMap<String, String> dataMap = hazelcast.getMap("data-map");
dataMap.put("customer-123", "customer data");

Future<String> localResult = executor.submitToKeyOwner(
    new KeyBasedTask("customer-123"), 
    "customer-123"
);

// Parallel task execution on all members
class ClusterStatsTask implements Callable<Long>, Serializable {
    @Override
    public Long call() throws Exception {
        // Calculate local data statistics. Assumes the member was started with
        // config.setInstanceName("my-instance"); implementing HazelcastInstanceAware
        // is the more robust alternative.
        HazelcastInstance localInstance = Hazelcast.getHazelcastInstanceByName("my-instance");
        IMap<String, String> localMap = localInstance.getMap("data-map");
        return localMap.getLocalMapStats().getOwnedEntryCount();
    }
}

Map<Member, Future<Long>> allResults = executor.submitToAllMembers(
    new ClusterStatsTask()
);

long totalEntries = 0;
for (Future<Long> future : allResults.values()) {
    totalEntries += future.get();
}
System.out.println("Total entries across cluster: " + totalEntries);

// Key-based task example
class KeyBasedTask implements Callable<String>, Serializable {
    private String key;
    
    public KeyBasedTask(String key) {
        this.key = key;
    }
    
    @Override
    public String call() throws Exception {
        // Same instance-name assumption as in ClusterStatsTask above
        HazelcastInstance instance = Hazelcast.getHazelcastInstanceByName("my-instance");
        IMap<String, String> map = instance.getMap("data-map");
        
        // Process local data related to key
        String data = map.get(key);
        return "Processed " + key + ": " + data;
    }
}
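
The ExecutorService ships tasks to members explicitly; for per-key updates, an EntryProcessor achieves the same data locality with less ceremony, executing atomically on the partition that owns the key. A minimal sketch (the processor name and suffix logic are illustrative, not from this article):

import com.hazelcast.map.EntryProcessor;
import com.hazelcast.map.IMap;
import java.util.Map;

// Runs on the key's owning member, avoiding a get()/put() round trip
class AppendSuffixProcessor implements EntryProcessor<String, String, String> {
    @Override
    public String process(Map.Entry<String, String> entry) {
        String updated = entry.getValue() + " [processed]";
        entry.setValue(updated);  // write-back is propagated to backups
        return updated;           // returned to the caller
    }
}

IMap<String, String> dataMap = hazelcast.getMap("data-map");
String result = dataMap.executeOnKey("customer-123", new AppendSuffixProcessor());
System.out.println("EntryProcessor result: " + result);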

SQL Operations and Real-time Queries

import com.hazelcast.sql.SqlService;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

// Get SQL service
SqlService sql = hazelcast.getSql();

// Create data mapping: 'json-flat' maps each column to a JSON field, and the
// id column is bound to the entry key via EXTERNAL NAME '__key'
String createMapping = """
    CREATE MAPPING IF NOT EXISTS employees (
        id BIGINT EXTERNAL NAME '__key',
        name VARCHAR,
        department VARCHAR,
        salary DECIMAL,
        hire_date DATE
    )
    TYPE IMap
    OPTIONS (
        'keyFormat' = 'bigint',
        'valueFormat' = 'json-flat'
    )
    """;

sql.execute(createMapping);

// Insert sample data
sql.execute("INSERT INTO employees VALUES (1, 'John Doe', 'Engineering', 75000, '2020-01-15')");
sql.execute("INSERT INTO employees VALUES (2, 'Jane Smith', 'Marketing', 65000, '2019-03-20')");
sql.execute("INSERT INTO employees VALUES (3, 'Bob Wilson', 'Engineering', 80000, '2021-07-10')");

// Basic SELECT query
try (SqlResult result = sql.execute("SELECT * FROM employees WHERE department = 'Engineering'")) {
    for (SqlRow row : result) {
        System.out.printf("ID: %d, Name: %s, Salary: %s%n",
            row.getObject("id"),
            row.getObject("name"),
            row.getObject("salary"));
    }
}

// Aggregation query
try (SqlResult result = sql.execute("""
    SELECT department, COUNT(*) as employee_count, AVG(salary) as avg_salary
    FROM employees 
    GROUP BY department
    """)) {
    
    for (SqlRow row : result) {
        System.out.printf("Department: %s, Count: %d, Avg Salary: %.2f%n",
            row.getObject("department"),
            row.getObject("employee_count"),
            row.getObject("avg_salary"));
    }
}

// Streaming query sketch: windowed streaming SQL needs a *stream* source (e.g.
// a Kafka mapping, assumed here as 'employee_events') and a sink table; a plain
// IMap mapping such as 'employees' is batch data, not a stream
String streamingQuery = """
    CREATE JOB employee_salary_monitor AS
    SINK INTO dept_salary_stats
    SELECT window_start, window_end, department,
           COUNT(*) AS employee_count, AVG(salary) AS avg_salary
    FROM TABLE(TUMBLE(
        (SELECT * FROM TABLE(IMPOSE_ORDER(
            TABLE employee_events, DESCRIPTOR(hire_date), INTERVAL '1' MINUTE))),
        DESCRIPTOR(hire_date), INTERVAL '1' HOUR))
    GROUP BY window_start, window_end, department
    """;

sql.execute(streamingQuery);

// Continuous Query Cache
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;
import com.hazelcast.map.MapEvent;
import com.hazelcast.map.QueryCache;
import com.hazelcast.query.Predicates;

IMap<String, Employee> employeeMap = hazelcast.getMap("employees");

// Query cache for high-salary employees
QueryCache<String, Employee> highSalaryCache = employeeMap.getQueryCache(
    "high-salary-employees",
    Predicates.sql("salary > 70000"),
    true
);

// Add listener to query cache
highSalaryCache.addEntryListener(new EntryListener<String, Employee>() {
    @Override
    public void entryAdded(EntryEvent<String, Employee> event) {
        System.out.println("High salary employee added: " + event.getValue().getName());
    }

    @Override
    public void entryUpdated(EntryEvent<String, Employee> event) {
        System.out.println("High salary employee updated: " + event.getValue().getName());
    }

    @Override
    public void entryRemoved(EntryEvent<String, Employee> event) {
        System.out.println("High salary employee removed: " + event.getOldValue().getName());
    }

    // EntryListener also mandates these callbacks; no-ops for this example
    @Override
    public void entryEvicted(EntryEvent<String, Employee> event) { }

    @Override
    public void entryExpired(EntryEvent<String, Employee> event) { }

    @Override
    public void mapCleared(MapEvent event) { }

    @Override
    public void mapEvicted(MapEvent event) { }
}, false);
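
The Employee type used by the query cache above is not defined in this section; a minimal serializable POJO along the following lines is assumed, with field names matching the predicate and listener calls:

import java.io.Serializable;
import java.math.BigDecimal;

// Hypothetical Employee value object assumed by the query cache example
class Employee implements Serializable {
    private long id;
    private String name;
    private String department;
    private BigDecimal salary;

    public Employee(long id, String name, String department, BigDecimal salary) {
        this.id = id;
        this.name = name;
        this.department = department;
        this.salary = salary;
    }

    public String getName() { return name; }
    public String getDepartment() { return department; }
    public BigDecimal getSalary() { return salary; }
}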

Spring Boot Integration and Microservices Coordination

// Spring Boot configuration (application.yml)
/*
hazelcast:
  cluster-name: microservices-cluster
  network:
    port: 5701
    port-auto-increment: true
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        service-name: hazelcast-service
  map:
    user-sessions:
      time-to-live-seconds: 1800
      backup-count: 1
*/

// Spring Boot Configuration class
@Configuration
@EnableCaching
public class HazelcastConfig {
    
    @Bean
    public Config hazelcastConfig() {
        Config config = new Config();
        config.setClusterName("microservices-cluster");
        
        // Map configuration
        MapConfig sessionConfig = new MapConfig("user-sessions");
        sessionConfig.setTimeToLiveSeconds(1800); // 30 minutes
        sessionConfig.setBackupCount(1);
        config.addMapConfig(sessionConfig);
        
        // Kubernetes discovery configuration
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true)
              .setProperty("service-name", "hazelcast-service");
        
        return config;
    }
    
    @Bean
    public HazelcastInstance hazelcastInstance(Config config) {
        return Hazelcast.newHazelcastInstance(config);
    }
    
    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        // com.hazelcast.spring.cache.HazelcastCacheManager (hazelcast-spring module)
        return new HazelcastCacheManager(hazelcastInstance);
    }
}

// Service with caching functionality
@Service
public class UserService {
    
    @Autowired
    private UserRepository userRepository;
    
    @Cacheable(value = "user-cache", key = "#userId")
    public User getUserById(String userId) {
        System.out.println("Fetching user from database: " + userId);
        return userRepository.findById(userId)
                .orElseThrow(() -> new UserNotFoundException(userId));
    }
    
    @CacheEvict(value = "user-cache", key = "#user.id")
    public User updateUser(User user) {
        return userRepository.save(user);
    }
    
    @CacheEvict(value = "user-cache", allEntries = true)
    public void clearAllUserCache() {
        System.out.println("All user cache cleared");
    }
}

// Distributed session management
@RestController
public class SessionController {
    
    @Autowired
    private HazelcastInstance hazelcastInstance;
    
    @PostMapping("/session/create")
    public ResponseEntity<String> createSession(@RequestBody LoginRequest request) {
        String sessionId = UUID.randomUUID().toString();
        
        IMap<String, UserSession> sessions = hazelcastInstance.getMap("user-sessions");
        
        UserSession session = new UserSession();
        session.setUserId(request.getUserId());
        session.setCreatedAt(LocalDateTime.now());
        session.setLastAccessTime(LocalDateTime.now());
        
        sessions.put(sessionId, session, 30, TimeUnit.MINUTES);
        
        return ResponseEntity.ok(sessionId);
    }
    
    @GetMapping("/session/{sessionId}")
    public ResponseEntity<UserSession> getSession(@PathVariable String sessionId) {
        IMap<String, UserSession> sessions = hazelcastInstance.getMap("user-sessions");
        
        UserSession session = sessions.get(sessionId);
        if (session != null) {
            // Update access time
            session.setLastAccessTime(LocalDateTime.now());
            sessions.put(sessionId, session, 30, TimeUnit.MINUTES);
            return ResponseEntity.ok(session);
        } else {
            return ResponseEntity.notFound().build();
        }
    }
}

// Distributed event processing
@Component
public class EventProcessor {
    
    @Autowired
    private HazelcastInstance hazelcastInstance;
    
    @PostConstruct
    public void setupEventProcessing() {
        // Event distribution via distributed topic
        ITopic<OrderEvent> orderTopic = hazelcastInstance.getTopic("order-events");
        
        orderTopic.addMessageListener(message -> {
            OrderEvent event = message.getMessageObject();
            System.out.println("Processing order event: " + event.getOrderId());
            
            // Order processing logic
            processOrderEvent(event);
        });
    }
    
    @EventListener
    public void handleUserAction(UserActionEvent event) {
        ITopic<UserActionEvent> topic = hazelcastInstance.getTopic("user-actions");
        topic.publish(event);
    }
    
    private void processOrderEvent(OrderEvent event) {
        // Prevent duplicate processing with a CP Subsystem lock (Hazelcast 5.x);
        // FencedLock.tryLock(timeout, unit) does not throw InterruptedException
        FencedLock orderLock = hazelcastInstance.getCPSubsystem()
                .getLock("order-lock-" + event.getOrderId());

        if (orderLock.tryLock(5, TimeUnit.SECONDS)) {
            try {
                // Order processing logic
                System.out.println("Processing order: " + event.getOrderId());
            } finally {
                orderLock.unlock();
            }
        }
    }
}
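
UserSession and OrderEvent (like LoginRequest and UserActionEvent) are application-side DTOs that this section does not define. Minimal sketches consistent with the calls made on them, with assumed fields and accessors:

import java.io.Serializable;
import java.time.LocalDateTime;

// Hypothetical session object; map and topic payloads must be serializable
class UserSession implements Serializable {
    private String userId;
    private LocalDateTime createdAt;
    private LocalDateTime lastAccessTime;

    public void setUserId(String userId) { this.userId = userId; }
    public void setCreatedAt(LocalDateTime createdAt) { this.createdAt = createdAt; }
    public void setLastAccessTime(LocalDateTime t) { this.lastAccessTime = t; }
}

// Hypothetical order event published on the "order-events" topic
class OrderEvent implements Serializable {
    private String orderId;

    public OrderEvent(String orderId) { this.orderId = orderId; }
    public String getOrderId() { return orderId; }
}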

Cluster Monitoring and Management Center Integration

// Service exposing cluster health and map statistics for monitoring
@Component
public class HazelcastMonitoringService {
    
    @Autowired
    private HazelcastInstance hazelcastInstance;
    
    public ClusterHealthInfo getClusterHealth() {
        Cluster cluster = hazelcastInstance.getCluster();
        
        ClusterHealthInfo health = new ClusterHealthInfo();
        health.setMemberCount(cluster.getMembers().size());
        health.setClusterState(cluster.getClusterState().toString());
        health.setClusterVersion(cluster.getClusterVersion().toString());
        
        // Statistics for each member
        for (Member member : cluster.getMembers()) {
            MemberHealthStats memberStats = new MemberHealthStats();
            memberStats.setAddress(member.getAddress().toString());
            memberStats.setUuid(member.getUuid().toString());
            
            health.addMemberStats(memberStats);
        }
        
        return health;
    }
    
    public Map<String, Object> getMapStatistics(String mapName) {
        IMap<?, ?> map = hazelcastInstance.getMap(mapName);
        LocalMapStats stats = map.getLocalMapStats();
        
        Map<String, Object> statistics = new HashMap<>();
        statistics.put("ownedEntryCount", stats.getOwnedEntryCount());
        statistics.put("backupEntryCount", stats.getBackupEntryCount());
        statistics.put("hits", stats.getHits());
        statistics.put("hitRatio", stats.getHitRatio());
        statistics.put("putOperationCount", stats.getPutOperationCount());
        statistics.put("getOperationCount", stats.getGetOperationCount());
        statistics.put("removeOperationCount", stats.getRemoveOperationCount());
        statistics.put("totalPutLatency", stats.getTotalPutLatency());
        statistics.put("totalGetLatency", stats.getTotalGetLatency());
        statistics.put("maxPutLatency", stats.getMaxPutLatency());
        statistics.put("maxGetLatency", stats.getMaxGetLatency());
        
        return statistics;
    }
    
    @Scheduled(fixedRate = 60000) // Every minute (requires @EnableScheduling)
    public void logClusterStatistics() {
        ClusterHealthInfo health = getClusterHealth();
        System.out.println("Cluster Health - Members: " + health.getMemberCount() 
                         + ", State: " + health.getClusterState());
        
        // Statistics for key maps
        String[] monitoredMaps = {"user-cache", "user-sessions", "order-cache"};
        for (String mapName : monitoredMaps) {
            Map<String, Object> stats = getMapStatistics(mapName);
            System.out.println("Map [" + mapName + "] - Entries: " 
                             + stats.get("ownedEntryCount") + ", Hit Ratio: " 
                             + stats.get("hitRatio"));
        }
    }
}
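
ClusterHealthInfo and MemberHealthStats are likewise application-side DTOs rather than Hazelcast classes; sketches matching the accessors used above (fields are assumptions):

import java.util.ArrayList;
import java.util.List;

// Hypothetical health-report DTOs for the monitoring service
class ClusterHealthInfo {
    private int memberCount;
    private String clusterState;
    private String clusterVersion;
    private final List<MemberHealthStats> members = new ArrayList<>();

    public int getMemberCount() { return memberCount; }
    public void setMemberCount(int memberCount) { this.memberCount = memberCount; }
    public String getClusterState() { return clusterState; }
    public void setClusterState(String clusterState) { this.clusterState = clusterState; }
    public void setClusterVersion(String clusterVersion) { this.clusterVersion = clusterVersion; }
    public void addMemberStats(MemberHealthStats stats) { members.add(stats); }
}

class MemberHealthStats {
    private String address;
    private String uuid;

    public void setAddress(String address) { this.address = address; }
    public void setUuid(String uuid) { this.uuid = uuid; }
}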