Master array structures and memory optimization techniques
```cpp
// Static array - stack allocated
int arr[1000];     // 4 KB on the stack
char buffer[256];  // 256 bytes on the stack
// Automatic cleanup when the scope ends
```
```cpp
// Dynamic array - heap allocated
int* arr = new int[size];                        // size determined at runtime
int* buffer = (int*)malloc(1000 * sizeof(int));  // C-style allocation (cast needed in C++)

// Manual cleanup required
delete[] arr;
free(buffer);
```
CPUs load data in chunks called cache lines (typically 64 bytes). Sequential array access is faster because multiple elements are loaded together, while random access causes more cache misses.
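As a rough illustration, the sketch below (a hypothetical micro-benchmark, not from the original material) sums the same array once sequentially and once in 16 strided passes that use only one int per fetched cache line. The array size, stride, and timing method are illustrative assumptions; exact numbers vary with the machine and compiler flags.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // 4M ints (~16 MB), large enough to exceed typical L1/L2 caches.
    const std::size_t n = 4 * 1024 * 1024;
    std::vector<int> data(n, 1);

    using clock = std::chrono::steady_clock;
    auto ms = [](clock::time_point a, clock::time_point b) -> long long {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };

    long long sum = 0;

    // Sequential pass over all n elements: each 64-byte cache line that is
    // fetched serves 16 consecutive ints.
    auto t0 = clock::now();
    for (std::size_t i = 0; i < n; ++i) sum += data[i];
    auto t1 = clock::now();

    // Sixteen strided passes also touch all n elements in total, but each
    // fetched cache line is used for only one int, so far more lines are
    // loaded from memory overall.
    for (std::size_t s = 0; s < 16; ++s)
        for (std::size_t i = s; i < n; i += 16) sum += data[i];
    auto t2 = clock::now();

    std::printf("sequential: %lld ms, strided: %lld ms (sum=%lld)\n",
                ms(t0, t1), ms(t1, t2), sum);
    return 0;
}
```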
| Aspect | Stack Array | Heap Array | Notes |
|---|---|---|---|
| Allocation | ~1 cycle | ~100+ cycles | Stack pointer increment vs heap search |
| Access Speed | Faster | Slightly slower | Access pattern and cache locality matter more than location |
| Memory Limit | ~8 MB (typical default stack size) | Limited mainly by available RAM | Exact limits depend on the OS and system configuration |
| Fragmentation | None | Possible | Heap can become fragmented |
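To make the memory-limit row concrete, here is a minimal sketch under the assumption of a typical ~8 MB default stack; the ~128 MB buffer size is an illustrative value, and real limits depend on the OS and build settings.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // A buffer this large (~128 MB) would overflow a typical ~8 MB stack:
    // int huge[32 * 1024 * 1024];               // likely crashes with a stack overflow
    //
    // The same amount of memory is fine on the heap:
    std::vector<int> huge(32 * 1024 * 1024, 0);  // ~128 MB, bounded only by available RAM

    std::printf("allocated %zu MB on the heap\n",
                huge.size() * sizeof(int) / (1024 * 1024));
    return 0;
}
```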