Modern Java for Rust Engineers
Module 2 of 8 · Intermediate · 30 min

Thinking About Memory Without a Borrow Checker

Prerequisites: build-tools-modules-project-structure

What You'll Learn

Why This Matters

In Rust, you track ownership and the compiler tracks lifetimes. Memory is freed exactly when the owner goes out of scope — deterministically, at compile-time-proven points. This model is so ingrained that switching to Java can feel like working without a safety net.

Java's answer is the garbage collector. You allocate freely, share references without restriction, and the JVM cleans up objects when they become unreachable — on its own schedule. The trade-off is real: you lose deterministic deallocation, but you gain freedom from the cognitive overhead of lifetimes and borrowing on every single line of code. For most application code, that trade-off is excellent. For latency-sensitive inner loops processing millions of records, you need to understand what the GC is doing and how to work with it rather than against it.

This module gives you the mental model you need to reason about memory in Java without a borrow checker — and to recognize the relatively rare situations where allocations actually matter.

Core Concept

The heap is the default. In Rust, a Payment struct lives on the stack by default; only Box<Payment> or putting it in a collection moves it to the heap. In Java, every new Payment(...) call allocates on the heap. There is no stack allocation syntax. The borrow checker disappears, and with it goes the automatic, deterministic cleanup. Instead, the JVM's garbage collector (GC) periodically identifies objects that have no more references pointing to them and reclaims their memory.
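To make the reference semantics concrete, here is a minimal sketch (the mutable Payment class below is an illustrative stand-in, not the course's actual type). In Rust, the assignment would move the value or require a borrow; in Java it silently creates a second reference to the same heap object.

```java
// Hypothetical mutable Payment class for illustration.
class Payment {
    long amountCents;
    Payment(long amountCents) { this.amountCents = amountCents; }
}

class HeapDemo {
    public static void main(String[] args) {
        Payment a = new Payment(100);  // every `new` allocates on the heap
        Payment b = a;                 // copies the reference, not the object
        b.amountCents = 250;           // the mutation is visible through both references
        System.out.println(a.amountCents);  // prints 250: a and b alias one heap object
    }
}
```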

Non-Deterministic Deallocation

When a local variable goes out of scope in Java, the object it pointed to is not freed. The variable's slot on the stack frame is reclaimed, but the object on the heap lives on until the GC decides to collect it. That decision is up to the GC algorithm, heap pressure, and the JVM's internal scheduler. It might happen microseconds later, or seconds later, or not until the next major collection.

Note: Do not assume Java objects are freed when they go out of a code block scope. They are freed when the GC decides — which may be much later, or never if a reference escapes into a long-lived collection or field.

This is not a bug; it is the design. The upshot for you as a Rust engineer is: stop thinking about who owns what, and start thinking about how long objects live and whether they are likely to be long-lived. That is the key cognitive shift.
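The shift from scope to reachability can be sketched in a few lines (the Payment record and auditLog field below are hypothetical, chosen to illustrate the escaping-reference case from the note above):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: scope does not control lifetime; reachability does.
class ReachabilityDemo {
    record Payment(long amountCents) {}

    static final List<Payment> auditLog = new ArrayList<>();  // long-lived collection

    static void handle() {
        Payment p = new Payment(500);
        auditLog.add(p);  // the reference escapes into a long-lived collection
    }  // `p` goes out of scope here, but the Payment object stays reachable

    public static void main(String[] args) {
        handle();
        // The object outlives the scope that created it: the GC will never
        // collect it while auditLog still holds a reference to it.
        System.out.println(auditLog.size());  // prints 1
    }
}
```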

The Generational Heap

Modern Java GCs exploit the generational hypothesis: most objects are short-lived. A Payment object created inside a method call and not stored anywhere will likely be unreachable by the time that method returns. The JVM takes advantage of this by dividing the heap into generations:

- Young generation: where new objects are allocated (in the Eden space) and where most of them die. It is collected by frequent, cheap minor GCs; the few survivors are copied between survivor spaces.
- Old (tenured) generation: objects that survive enough minor GCs are promoted here. It is collected by less frequent but much more expensive major GCs.

The practical implication: if your code creates many short-lived objects (typical for stream pipelines, temporary results, intermediate transformations), they will almost all die in the young generation — cheap to collect. If your code inadvertently promotes objects to the old generation (by caching them, storing them in static fields, or keeping references in long-lived collections), you may trigger expensive major GCs.
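The two lifetime patterns from the paragraph above can be contrasted in code (the Payment record, method names, and cache field are illustrative, not from the course codebase):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch contrasting short-lived allocations with inadvertent promotion.
class GenerationsDemo {
    record Payment(String id, long amountCents) {}

    // Pattern 1: short-lived intermediates. The stream's internal objects are
    // created and become unreachable within this call: classic young-gen garbage.
    static long total(List<Payment> batch) {
        return batch.stream().mapToLong(Payment::amountCents).sum();
    }

    // Pattern 2: a static cache. Every Payment stored here stays reachable,
    // survives minor GCs, and is eventually promoted to the old generation.
    static final Map<String, Payment> cache = new HashMap<>();

    static void remember(Payment p) {
        cache.put(p.id(), p);
    }
}
```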

Modern GC Algorithms

Java GC in 2024 is not the stop-the-world GC from Java 6. Three GC algorithms are worth knowing:

- G1GC: the default since Java 9. It divides the heap into regions and works toward a configurable pause-time target, doing much of its marking concurrently with application threads.
- ZGC: a low-latency collector designed for sub-millisecond pauses, performing marking and compaction concurrently while the application runs.
- Shenandoah: similar low-pause goals, with concurrent compaction; available in many OpenJDK distributions.

The point is not to memorize these but to understand that non-deterministic does not mean uncontrolled. Modern GCs are sophisticated, and for the vast majority of Java services, GC pauses are not a problem you will need to tune.

Escape Analysis

The JIT compiler performs escape analysis at runtime: it analyzes whether an object allocated with new can possibly be accessed outside the creating method. If an object is proven to not escape — no reference to it is passed out, stored in a field, or returned — the JIT may:

- eliminate the heap allocation entirely via scalar replacement, keeping the object's fields in registers or on the stack frame;
- elide synchronization on the object (lock elision), since no other thread can ever observe it.

This sounds like Rust's default stack allocation, but there are important differences. Escape analysis is a runtime optimization applied after JIT warmup — it is not guaranteed and it does not apply to cold code. Rust's stack allocation is a compile-time guarantee that applies everywhere, always. For a batch job that runs to completion before the JIT warms up, escape analysis provides no benefit. For a long-running server that has processed millions of requests, the JIT may have already optimized away many short-lived allocations.

Rust comparison: Rust's stack allocation is a first-class language feature; the developer controls it explicitly. Java's escape analysis is a best-effort JIT optimization that the developer cannot directly invoke or guarantee. Do not rely on escape analysis for correctness or performance guarantees.
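For intuition, here is a sketch of the kind of allocation that is a good escape-analysis candidate (the Point record and method are illustrative; whether HotSpot actually scalar-replaces it depends on warmup and the JVM build, and can be compared in benchmarks by disabling the optimization with HotSpot's -XX:-DoEscapeAnalysis flag):

```java
// Sketch: an allocation the JIT can prove never escapes the method.
class EscapeDemo {
    record Point(int x, int y) {}

    static int distanceSquared(int x, int y) {
        Point p = new Point(x, y);  // never escapes: not returned, not stored in a field
        return p.x() * p.x() + p.y() * p.y();
        // After JIT warmup, HotSpot may scalar-replace `p`: no heap allocation,
        // just two ints in registers. In cold code, this is a real allocation.
    }
}
```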

AutoCloseable and try-with-resources

Memory is not the only resource. File handles, database connections, network sockets — these need deterministic cleanup regardless of what the GC does. Java provides AutoCloseable for this: any class that implements AutoCloseable exposes a close() method, and the try-with-resources syntax guarantees that close() is called when the block exits, whether normally or via exception.

// Java
try (var conn = dataSource.getConnection();
     var stmt = conn.prepareStatement("SELECT ...")) {
    // use conn and stmt
} // conn.close() and stmt.close() called here, deterministically

This is Java's equivalent to Rust's Drop trait. The mechanism differs — explicit syntax in Java versus automatic compiler-inserted cleanup in Rust — but the effect is the same: the resource is released at a predictable point. This matters for any resource with a finite pool (database connections, file descriptors). You will see this pattern again in Module 06 when we wrap payment processing in a database transaction.
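Any class of your own gets the same guarantee by implementing AutoCloseable. The sketch below (Resource and the events list are illustrative names) also shows that resources declared in one try-with-resources header are closed in reverse declaration order, mirroring how Rust drops values in reverse order within a scope:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a custom AutoCloseable gets deterministic cleanup from
// try-with-resources, closing in reverse declaration order.
class CloseDemo {
    static final List<String> events = new ArrayList<>();

    record Resource(String name) implements AutoCloseable {
        Resource { events.add("open " + name); }  // compact constructor logs the open
        @Override public void close() { events.add("close " + name); }
    }

    public static void main(String[] args) {
        try (var conn = new Resource("conn"); var stmt = new Resource("stmt")) {
            events.add("work");
        }  // close() runs here even if the body throws: stmt first, then conn
        System.out.println(events);
        // prints [open conn, open stmt, work, close stmt, close conn]
    }
}
```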

Concrete Example

Consider the running example: a payment processor that receives one million Payment records for batch processing.

// Java
public void processBatch(List<Payment> payments) {
    for (Payment payment : payments) {
        PaymentResult result = validate(payment);
        // result is created here, used once, then unreachable
        log(result);
        // After log() returns, result has no more references
        // It will be collected in the next minor GC — it never leaves the young gen
    }
    // payments list itself becomes unreachable after the method returns
    // The Payment objects it referenced also become unreachable
    // The GC reclaims all of them on its own schedule
}

Where are the Payment objects? On the heap — all one million of them. The list holds references to them; as long as the list is alive, so are they. After processBatch returns and the list goes out of scope, all one million Payment objects become unreachable and are eligible for collection.

The short-lived PaymentResult objects created inside the loop are a good example of objects that will almost certainly die in the young generation: they are created, used, and become unreachable within a single loop iteration — the classic short-lived allocation pattern that generational GC is designed for.

In Rust, this would look different:

// Rust equivalent
fn process_batch(payments: Vec<Payment>) {
    for payment in &payments {
        let result = validate(payment);
        log(&result);
        // result is dropped here, deterministically, end of this iteration
    }
    // payments is dropped here, deterministically, end of the function
    // Each Payment inside it is dropped as the Vec is deallocated
}

In Rust, each result is dropped at the end of the loop body — no GC needed. The payments vector and every Payment inside it are dropped when process_batch returns. The deallocation is woven into the control flow at compile time.

Would the Java version cause performance problems? Probably not. The PaymentResult objects in the loop are short-lived and will be collected cheaply in minor GCs. The Payment objects survive slightly longer but become unreachable when the method returns. A production Java service processing one million payments in a batch would handle this without GC trouble — provided the Payment objects are not being accumulated in a long-lived cache at the same time.
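If profiling ever did show the per-iteration PaymentResult allocation to matter, one common refactoring is to replace the allocated result object with an enum for the simple pass/fail case. Enum constants are singletons, so the loop allocates nothing. The sketch below uses hypothetical names and a made-up validation rule:

```java
import java.util.List;

// Sketch: returning an enum constant instead of allocating a result object.
class ValidationDemo {
    record Payment(long amountCents) {}

    enum Outcome { VALID, INVALID }

    static Outcome validate(Payment p) {
        return p.amountCents() > 0 ? Outcome.VALID : Outcome.INVALID;
    }

    static int countValid(List<Payment> batch) {
        int valid = 0;
        for (Payment p : batch) {
            if (validate(p) == Outcome.VALID) valid++;  // no per-iteration allocation
        }
        return valid;
    }
}
```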

Analogy

Apartment leasing vs. homeownership.

Java's GC is like renting an apartment: the landlord (the GC) eventually reclaims unused units. You move out (the object becomes unreachable), and at some point the landlord notices and re-rents the unit. You do not need to know when the landlord will clean it; you just leave. The landlord monitors and cleans up periodically, on their schedule, not yours.

Rust's ownership model is like owning a home: you decide exactly when to buy (allocate) and sell (deallocate). There is no surprise eviction and no waiting for the landlord. You pay the upfront cost of explicit ownership decisions, but you gain perfect predictability.

Modern low-latency GCs (ZGC, Shenandoah) are more like a landlord with a very efficient cleaning crew that works concurrently while you are still living there, touching your unit only for tiny moments. The eviction is still unpredictable, but the interruption is minimal.

Going Deeper

Project Valhalla value types represent Java's acknowledgment that heap-only allocation is sometimes the wrong trade-off. Value classes (still in development under Project Valhalla and not yet available in a released JDK) give up object identity, which frees the JVM to flatten them and pass them by value like primitives rather than by reference. For a Payment record, a value type declaration would mean that the JVM can represent a Payment[] array as a flat contiguous block of memory rather than an array of pointers to scattered heap objects. This is exactly how Rust lays out Vec<Payment> by default.

Until Valhalla stabilizes, you can achieve similar performance in hot paths by using primitive arrays manually or by restructuring data to reduce object count. For most application code, this is premature optimization.
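The "primitive arrays" workaround can be sketched as a structure-of-arrays layout (the class and field names below are illustrative): instead of a Payment[] array of pointers to scattered heap objects, the fields live in parallel primitive arrays, giving a flat, cache-friendly layout similar in spirit to what Valhalla aims to provide automatically.

```java
// Sketch: structure-of-arrays layout for a hot path, using primitive arrays.
class PaymentColumns {
    final long[] amountCents;
    final int[] statusCode;

    PaymentColumns(int capacity) {
        this.amountCents = new long[capacity];
        this.statusCode = new int[capacity];
    }

    long totalCents() {
        long sum = 0;
        for (long a : amountCents) sum += a;  // sequential scan over contiguous memory
        return sum;
    }
}
```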

GC and concurrency. In Java, GC pauses affect all threads — a stop-the-world collection halts every thread in the JVM. G1GC and ZGC minimize this with concurrent marking and compaction that happen while your application threads run. But the worst-case scenario (a full GC triggered by promotion failure) can pause everything. For a payment processor with strict latency SLAs, understanding this risk — and choosing ZGC for its sub-millisecond pauses — is a real operational decision.

Object identity vs. value equality. In Rust, == on a struct compares field values (once you #[derive(PartialEq)]). In Java, == on objects compares references (identity), not values. Two Payment objects with identical fields are not == unless they are literally the same heap object. Use .equals() for value comparison. Records auto-generate a value-based .equals(), which matches the Rust intuition — but only for records. For regular classes, you must implement .equals() yourself (together with .hashCode()) or risk subtle bugs.
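A short sketch of the distinction (the Payment record here is a hypothetical stand-in):

```java
// Sketch: reference identity (==) vs value equality (.equals()).
class EqualityDemo {
    record Payment(String id, long amountCents) {}

    public static void main(String[] args) {
        Payment a = new Payment("p-1", 100);
        Payment b = new Payment("p-1", 100);
        System.out.println(a == b);       // false: two distinct heap objects
        System.out.println(a.equals(b));  // true:  record-generated value equality
        Payment c = a;
        System.out.println(a == c);       // true:  same reference, same object
    }
}
```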

Common Misconceptions

1. "Java objects are freed when they go out of scope."

This is the Rust model, not the Java model. In Java, a variable going out of scope simply means that reference is no longer held — the object itself lives on the heap until the GC finds no remaining references pointing to it. A Payment object stored in a List field on a long-lived service object will never be collected until that list is cleared or the service object itself becomes unreachable. Scope does not control lifetime; reachability does.

2. "Escape analysis makes Java as memory-efficient as Rust."

Escape analysis is a real and valuable JIT optimization, but it comes with significant caveats: it requires JIT warmup, the analysis can fail if the JIT cannot prove the escape property (and JVM implementations differ), and it applies only to specific allocation patterns. Rust's stack allocation is a compile-time guarantee that requires zero warmup and applies unconditionally. For batch processing or short-lived CLI tools, escape analysis may provide zero benefit. Do not rely on it as an equivalent to Rust's stack allocation.

3. "Modern GC means I never need to think about allocations."

For typical CRUD application code, this is largely true — and it is one of Java's genuine advantages over Rust for certain problem domains. But for high-throughput services, latency-sensitive paths, and inner loops that run millions of times, allocation patterns matter. Creating many large objects in a tight loop can overwhelm the young generation and trigger major GC pauses. The practical advice: ignore allocations during normal development, but profile before optimizing. Do not guess; measure.

Check Your Understanding

  1. You write a Java method that creates a Payment object, calls a validation method with it, and returns. What happens to the Payment object after the method returns?

    Answer: The Payment object was allocated on the heap when you called new Payment(...). After the method returns, the local variable holding the reference goes out of scope, so the reference is gone. The object itself remains on the heap until the GC determines it is unreachable (no other references point to it) and collects it. This could happen in the next minor GC, or later. It is not deterministic.

  2. You are processing a batch of one million payments in a loop and creating a temporary PaymentResult on each iteration. You notice the JVM is spending a lot of time in minor GCs. What is happening, and what could you do about it?

    Answer: You are creating one million short-lived PaymentResult objects. They die young (good — they stay in the young generation), but even minor GCs have overhead. If the rate of allocation is high enough, you can overwhelm the Eden space and trigger continuous minor GCs. Strategies to reduce this: (1) Reuse a result object if the result structure allows mutation (though this sacrifices immutability), or (2) redesign the logic to avoid creating intermediate result objects — for example, by encoding the result as a boolean or an enum value for the simple success/failure case instead of allocating a full object.

  3. What is the difference between AutoCloseable.close() and GC finalization in Java? When should you use each?

    Answer: AutoCloseable.close() in a try-with-resources block is called deterministically at the end of the block, whether the block exits normally or via exception. It is the right mechanism for releasing resources like database connections, file handles, and network sockets — anything with a finite pool or that holds OS-level resources. GC finalization (finalize(), now deprecated, or Cleaner in modern Java) is non-deterministic: the GC may call it whenever it collects the object, or never at all if the object is never collected. Finalization is unreliable for resource cleanup. Always use try-with-resources for I/O resources.

  4. A colleague argues that Java's escape analysis means records used as local variables are stack-allocated, just like Rust structs. Is this correct?

    Answer: Not exactly. Escape analysis may cause the JIT to stack-allocate or eliminate short-lived records, but this is a runtime optimization — not a guarantee. It requires JIT warmup, can fail if the JIT's analysis is uncertain, and is not visible or controllable by the programmer. In Rust, a struct declared as a local variable is always on the stack (unless boxed). The intent is similar, but the reliability and timing differ significantly. For production performance guarantees, do not rely on escape analysis as an equivalent to Rust's stack allocation.

  5. In Rust, when a Vec<Payment> goes out of scope, each Payment in it is dropped deterministically. What is the equivalent behavior in Java when a List<Payment> goes out of scope?

    Answer: There is no equivalent. When a List<Payment> variable goes out of scope in Java, that reference is released, but the List object and the Payment objects it references remain on the heap until the GC collects them. The GC will eventually reclaim them if no other references exist, but the timing is non-deterministic. There is no Drop equivalent for Java objects — no code runs automatically when the list becomes unreachable (unlike close() on AutoCloseable, which you must explicitly trigger via try-with-resources).

Key Takeaways

References