Why This Matters
Arrays are the default container because they make position meaningful. If the base address and element size are known, the address of item i is computed directly. That one fact explains indexing, cache-friendly scans, binary search, binary heaps, and dense numeric vectors.
For lower-level memory details, connect this page to ComputationPath: pointers and addresses, cache hierarchy, and memory allocation.
Core Idea
An array stores values in consecutive slots:
| Operation | Typical cost | Reason |
|---|---|---|
| read `A[i]` | O(1) | direct address computation |
| write `A[i]` | O(1) | same address computation |
| scan all items | O(n) | one visit per item |
| insert at front | O(n) | items must shift |
| append to dynamic array | amortized O(1) | occasional resize copies many items |
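The append row deserves a closer look. One common growth policy is capacity doubling; the toy class below is an illustrative sketch of that policy, not any particular runtime's implementation. It counts how many element copies resizing causes in total:

```python
class DynArray:
    """Minimal dynamic array with capacity doubling (illustrative sketch)."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.slots = [None]          # backing storage
        self.copies = 0              # element moves caused by resizes

    def append(self, x):
        if self.size == self.capacity:
            # Full: allocate a block twice as large and copy everything over.
            self.capacity *= 2
            new_slots = [None] * self.capacity
            new_slots[:self.size] = self.slots
            self.slots = new_slots
            self.copies += self.size
        self.slots[self.size] = x
        self.size += 1

d = DynArray()
for i in range(1000):
    d.append(i)

# Resizes copied 1 + 2 + 4 + ... + 512 = 1023 elements in total,
# which is less than 2 * 1000 -- so each append is amortized O(1).
assert d.copies < 2 * 1000
```

The geometric growth is what makes the amortization work: total copy work over n appends stays below 2n, even though a single resizing append can be expensive.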
Worked example: in an array of 1,000 integers, reading the item at index 713 does not require reading the first 713 items. The machine computes the location and loads that slot.
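The worked example can be checked directly in CPython, whose `array` module stores machine integers contiguously (unlike a list of objects). Computing `base + i * itemsize` and reading that address yields the element without touching any earlier slot:

```python
import ctypes
from array import array  # contiguous C-style storage

a = array("i", range(1000))     # 1,000 machine integers in one block
base, _ = a.buffer_info()       # base address of the block
addr = base + 713 * a.itemsize  # address of slot 713: base + i * size

# Read the slot straight from memory; no earlier element is visited.
value = ctypes.c_int.from_address(addr).value
assert value == a[713] == 713
```

This is CPython-specific plumbing used only to make the address arithmetic visible; the same `base + i * size` computation happens implicitly on every indexed access.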
Common Mistakes
- Treating insertion as cheap everywhere. Appending can be amortized O(1), but insertion in the middle shifts the entire suffix.
- Forgetting bounds. Valid indices are usually `0` through `n - 1`.
- Assuming arrays and linked lists are interchangeable. Arrays win on indexing and locality; linked lists win only when pointer-based splicing is the real workload.
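The shifting cost behind the first mistake can be sketched in a few lines. This hypothetical `insert_at` helper works on a fixed-capacity buffer with spare room at the end, the way array-backed lists do internally:

```python
def insert_at(buf, size, i, x):
    """Insert x at index i in buf[:size], shifting the suffix right one slot.

    Assumes spare capacity (len(buf) > size). Returns the new size.
    Illustrative sketch of what e.g. list.insert does under the hood.
    """
    for j in range(size, i, -1):  # shift buf[i:size] right by one
        buf[j] = buf[j - 1]
    buf[i] = x
    return size + 1

buf = [10, 20, 30, None]          # size 3, one spare slot
size = insert_at(buf, 3, 0, 5)    # front insertion moves all 3 items
assert buf == [5, 10, 20, 30] and size == 4
```

Inserting at the front moves every existing element; inserting at the end moves none. That asymmetry is the whole story behind the O(n) front-insert row versus the amortized O(1) append.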
Check Yourself
- Why is array indexing O(1)?
- When can a linked list beat an array?
- What cost does dynamic-array resizing hide?
Next Topics