March 26, 2026 · 6 min read

Arrays vs Linked Lists: An Honest Comparison

The real differences between arrays and linked lists — memory layout, cache behavior, time complexity, and why arrays win almost every time in practice.

arrays linked-lists data-structures performance

Every CS course starts with arrays vs linked lists. You get the textbook comparison — arrays have O(1) random access, linked lists have O(1) insertion at the head, choose based on your use case. Neat and clean.

The reality is messier. Arrays dominate in almost every practical scenario, and the reasons go deeper than time complexity tables suggest.

How They Actually Sit in Memory

An array is a contiguous block of memory. Elements are packed side by side. If element 0 starts at address 1000 and each element is 4 bytes, element 5 is at address 1020. No indirection, no pointer chasing.
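The address arithmetic is simple enough to sketch directly. This hypothetical `element_address` helper (not from any library) is just the formula the CPU applies:

```python
def element_address(base: int, elem_size: int, index: int) -> int:
    """Address of element `index` in a contiguous array starting at `base`."""
    return base + index * elem_size

# Matches the example above: 4-byte elements starting at address 1000.
print(element_address(1000, 4, 5))  # 1020
```

One multiply, one add, no memory reads. That's why indexed access is O(1) in the strongest possible sense.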

A linked list is scattered. Each node lives wherever malloc (or your language's allocator) decided to put it. Node 0 might be at address 1000, node 1 at address 58000, node 2 at address 3200. Each node holds a pointer to the next one.
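A minimal node sketch (the `Node` class is illustrative, not from any particular library) makes the indirection visible: each node carries a payload plus a reference to wherever the next node happens to live.

```python
class Node:
    """One linked-list cell: the data plus a pointer to the next cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Nodes can live anywhere in memory; only the `next` pointers tie them together.
head = Node(10, Node(20, Node(30)))

# Traversal has to chase pointers one hop at a time.
node, values = head, []
while node is not None:
    values.append(node.value)
    node = node.next
print(values)  # [10, 20, 30]
```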

This difference sounds academic until you understand CPU caches.

Cache Locality: Why Arrays Are Fast in Practice

Modern CPUs don't fetch individual bytes from RAM. They fetch cache lines — typically 64 bytes at a time. When you access arr[0], the CPU loads arr[0] through arr[15] (for 4-byte integers) into the L1 cache. Accessing arr[1] through arr[15] is then nearly free — it's already in cache.

With a linked list, accessing one node tells the CPU nothing about where the next node lives. Every node->next dereference is potentially a cache miss. A cache miss to main RAM costs roughly 100 nanoseconds. A cache hit costs about 1 nanosecond. That's a 100x difference that Big-O analysis completely ignores.

This is why iterating through a linked list is technically O(n) — same as an array — but in wall-clock time, the array version can be 10-50x faster for large n.
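You can get a rough feel for this from Python, with a big caveat: a Python list stores object pointers rather than packed values, so this understates the gap you'd measure in C. Still, scanning a list against a linked list built from the same values (illustrative `Node` class) shows the same O(n) work taking different wall-clock time:

```python
import timeit

class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value, self.next = value, next

n = 100_000
data = list(range(n))

# Build a linked list holding the same values.
head = None
for v in reversed(data):
    head = Node(v, head)

def sum_list():
    return sum(data)

def sum_linked():
    total, node = 0, head
    while node is not None:
        total += node.value
        node = node.next
    return total

# Same O(n) work, same answer; the array-backed scan is the faster one.
assert sum_list() == sum_linked()
print("list:  ", timeit.timeit(sum_list, number=10))
print("linked:", timeit.timeit(sum_linked, number=10))
```

Exact ratios vary by machine and runtime; the point is that identical Big-O can hide very different constants.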

The Time Complexity Table

Operation               Array                 Linked List
---------------------------------------------------------------------
Access by index         O(1)                  O(n)
Search (unsorted)       O(n)                  O(n)
Insert at beginning     O(n)                  O(1)
Insert at end           O(1) amortized*       O(1) with tail pointer
Insert at middle        O(n)                  O(1) if you have the node**
Delete at beginning     O(n)                  O(1)
Delete at middle        O(n)                  O(1) if you have the node**
Memory per element      Just the data         Data + pointer(s)

*Dynamic arrays grow (typically by doubling) when full, giving amortized O(1) appends.

**This is the catch that textbooks gloss over. Linked list insertion is O(1) only if you already have a reference to the insertion point. Finding that point is O(n). So "insert in the middle" is really O(n) in most realistic scenarios.
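You can watch the amortized growth happen from Python: `sys.getsizeof` reports a list's allocated size, which jumps only occasionally because CPython over-allocates on growth (the exact growth steps are an implementation detail, so don't rely on specific numbers):

```python
import sys

sizes = []
lst = []
for _ in range(64):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# The allocation grows in a handful of jumps, not once per append.
resizes = sum(1 for a, b in zip(sizes, sizes[1:]) if b != a)
print(f"{len(lst)} appends, {resizes} resizes")
```

Most appends land in already-reserved space; only the occasional one pays for a reallocation, which is exactly what "amortized O(1)" means.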

When Arrays Win (Most of the Time)

  • Random access. If you need to access elements by index, it's not even close: arrays are O(1), linked lists are O(n).
  • Iteration. Both are O(n), but arrays are dramatically faster due to cache locality. If you're doing any kind of scan, filter, map, or reduce operation, arrays win.
  • Memory efficiency. A singly linked list node for a 4-byte integer needs 4 bytes for the data plus 8 bytes for the pointer (on 64-bit systems). That's 3x the memory. Doubly linked? 5x. Arrays have zero overhead per element.
  • Appending. Dynamic arrays (ArrayList, Vec, Python's list, JavaScript's Array) handle this with amortized O(1) appends. The occasional resize is expensive, but spread across all operations, it averages out.
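The memory-overhead claim is just back-of-envelope arithmetic, worth spelling out once:

```python
DATA = 4   # bytes for a 32-bit integer payload
PTR = 8    # bytes per pointer on a 64-bit system

singly = DATA + PTR      # 12 bytes per element
doubly = DATA + 2 * PTR  # 20 bytes per element

print(singly / DATA, doubly / DATA)  # 3.0 5.0
```

And that ignores allocator bookkeeping and alignment padding, which usually make the per-node cost worse than this ideal figure.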

When Linked Lists Actually Matter

  • Constant-time splicing. If you need to merge two lists or move a chunk of elements from one position to another, linked lists can do this in O(1) by rewiring pointers. Arrays require O(n) copying.
  • Guaranteed O(1) insertion/deletion (not amortized). Real-time systems sometimes can't tolerate the occasional O(n) array resize. Linked lists have truly constant-time operations (given a node reference).
  • Building data structures on top. LRU caches typically use a doubly linked list combined with a hash map. The hash map gives O(1) lookup, and the linked list gives O(1) reordering. You move recently accessed items to the front by rewiring two pointers.
  • Lock-free concurrent data structures. Some concurrent algorithms use linked lists because pointer swaps can be done atomically with CAS (compare-and-swap) operations.
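The LRU pattern is concrete enough to sketch. This is an illustrative implementation, not any library's API: a dict gives O(1) key lookup, and a doubly linked list with sentinel head/tail nodes gives O(1) move-to-front plus O(1) eviction of the least recently used entry.

```python
class _Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}            # key -> node, for O(1) lookup
        self.head = _Node()      # sentinels avoid edge cases at the ends
        self.tail = _Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)       # two pointer rewires: O(1) reordering
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev         # least recently used lives at the back
            self._unlink(lru)
            del self.map[lru.key]
        node = _Node(key, value)
        self.map[key] = node
        self._push_front(node)
```

In production Python you'd reach for collections.OrderedDict or functools.lru_cache instead, but the hand-rolled version shows exactly why the linked list is there.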

What Languages Actually Use

Notice how basically every mainstream language chose dynamic arrays as the default list type:

  • Python: list (dynamic array)
  • Java: ArrayList (dynamic array), LinkedList exists but the Java docs practically tell you not to use it
  • JavaScript: Array (dynamic array)
  • Go: slices (dynamic array)
  • Rust: Vec (dynamic array)
This isn't a coincidence. The people building these languages benchmarked both approaches and arrays won consistently for general-purpose use.

Java's LinkedList is a famous example — it implements both List and Deque, but ArrayList outperforms it on almost every operation in practice, including operations where linked lists have better theoretical complexity. The JDK team even considered deprecating it.

The Interview Angle

Interviewers love linked list problems not because linked lists are practically important, but because they test pointer manipulation skills. Reversing a linked list, detecting cycles, merging sorted lists — these are fundamentally about careful pointer/reference management.
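As an example of the pointer care involved, here's the classic iterative reversal; the only subtlety is saving the next pointer before rewiring it (the `Node` class is illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(head):
    """Reverse a singly linked list in place; returns the new head."""
    prev = None
    while head is not None:
        nxt = head.next    # save before we overwrite the pointer
        head.next = prev   # rewire this node to point backwards
        prev = head        # advance prev
        head = nxt         # advance head
    return prev

# 1 -> 2 -> 3 becomes 3 -> 2 -> 1
new_head = reverse(Node(1, Node(2, Node(3))))
values = []
while new_head is not None:
    values.append(new_head.value)
    new_head = new_head.next
print(values)  # [3, 2, 1]
```

Get the rewiring order wrong and you either lose the rest of the list or loop forever, which is precisely what the exercise is testing.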

Understanding why arrays usually win is also valuable in interviews. If someone asks you to choose a data structure and you pick a linked list, you should have a specific reason beyond "insertion is O(1)." Knowing about cache locality shows you think about real performance, not just theoretical complexity.

You can practice both array and linked list problems on CodeUp, where the problems are structured to build up from basic operations to the trickier patterns (like Floyd's cycle detection or the fast-slow pointer technique) that keep coming up in interviews.
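Floyd's fast-slow pointer technique, mentioned above, fits in a few lines: if a fast pointer moving two steps ever meets a slow pointer moving one step, the list must contain a cycle (the `Node` class is again illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def has_cycle(head) -> bool:
    """Floyd's cycle detection: O(n) time, O(1) extra space."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next           # one step
        fast = fast.next.next      # two steps
        if slow is fast:           # pointers can only meet inside a cycle
            return True
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_cycle(a))  # False
c.next = b           # create a cycle: 1 -> 2 -> 3 -> 2 -> ...
print(has_cycle(a))  # True
```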

The Honest Summary

Use arrays (or your language's dynamic array) by default. Reach for a linked list only when you have a specific, concrete reason: you need O(1) splicing, you're building an LRU cache, or you have hard real-time constraints that can't tolerate amortized costs. In 15 years of writing production code, I've used a standalone linked list maybe twice outside of interview prep.
