Master the most fundamental data structure
Section 2 of 4
Why are arrays the fastest data structure for indexed access? It is not just about indexing; it is about how they respect the hardware architecture.
In memory, an array is a single contiguous block of bytes. When you declare int arr[10], the compiler (or runtime) reserves enough space for 10 integers laid out back to back.
Because elements are packed tightly, they must all be the same size. With a mix of different-sized items, the hardware could not compute an element's offset with a single multiplication.
Base address: the starting memory address of the first element.
Target = Base + (Index * ElementSize)
Notice how array elements sit next to each other. Each element is 4 bytes (the size of an int on most platforms) past the previous one.
CPUs are much faster than RAM. To avoid waiting for data, CPUs use a small, lightning-fast memory called the Cache.
When the CPU requests element arr[i], the hardware assumes you will soon need arr[i+1]. It fetches a whole block (a "Cache Line") at once.
Traversing an array is fast because neighboring elements are already in the cache (a Cache Hit). Traversing a linked list is slow because its nodes are scattered across the heap (Cache Misses).
(Chart: memory access latency by level; higher is much slower.)
Stack allocation:
- Fixed size: determined at compile time.
- Automatic: freed when the function ends.
- Speed: the fastest allocation method.

Heap allocation:
- Dynamic: size can change at runtime.
- Manual: must be freed explicitly or garbage collected.
- Overhead: slower access through pointer indirection.
When you iterate over an array sequentially, the CPU prefetches upcoming data from RAM into the cache. This is why for (int i = 0; i < n; i++) is so well optimized on modern hardware.
Rule of thumb: Contiguous memory = Speed.