Developer notes · May 2026

Data Structures & Algorithms

One long page I use as a map: complexity first, then structures and algorithms that keep showing up in real systems, plus patterns and interview habits. Diagrams are inline so I can skim before a session or a design review.

Source: if you pass this file around, link LinhTruong.com so the canonical copy keeps the author line.

Last revised May 2026 · Single-file HTML (print-friendly)

1. Introduction & Why DSA Matters

Data Structures and Algorithms (DSA) means picking the right representation for data and a procedure that matches your constraints. I keep tripping over it—latency budgets, interview loops, and code nobody profiled. The wrong structure or an overly loose bound is how something “fine locally” turns painful under load.

For Interviews

Still the usual signal at big tech and many startups: can you pick a pattern, state the complexity, and code it clearly?

For Daily Engineering

Picking HashMap vs TreeMap, designing caches, optimizing queries, debugging slow code — all DSA.

For Systems & AI

Databases (B-Trees), Git (DAGs & hashing), Compilers (parse trees), ML (graphs, tensors), Crypto (Merkle trees).

2. Complexity Analysis — The Language of Performance

Before the structures themselves, it helps to agree on Big-O notation: an asymptotic upper bound on how work grows with input size n.

Figure 2.1 — How common Big-O functions scale with input size.
Notation | Name | Example | n = 1,000,000
O(1) | Constant | Array index, hash lookup | 1 op
O(log n) | Logarithmic | Binary search, balanced BST | ~20 ops
O(n) | Linear | Scan, single loop | 1,000,000 ops
O(n log n) | Linearithmic | Merge sort, heapsort | ~20,000,000 ops
O(n²) | Quadratic | Bubble sort, nested loops | 10¹² ops (infeasible)
O(2ⁿ) | Exponential | Naive subset/Fibonacci | astronomical; infeasible above n ≈ 30
O(n!) | Factorial | Brute-force permutations | infeasible above n ≈ 12

2.1 Big-O, Big-Ω, and Big-Θ

Big-O is an asymptotic upper bound, Big-Ω a lower bound, and Big-Θ a tight bound (both at once). "Binary search is O(n)" is technically true but unhelpful; Θ(log n) in the worst case is the precise statement. In casual use, "O(…)" almost always means Θ.

2.2 Amortized Complexity

The average cost per operation across a sequence. A dynamic array's push is O(1) amortized even though occasional resizes are O(n) — the cost is spread over many cheap operations.

Pitfall Hidden constants matter in practice. An O(n log n) sort with high constants can lose to an O(n²) sort for n < 50. This is why std::sort, Python's Timsort, and Java's Dual-Pivot Quicksort use hybrid strategies (e.g., falling back to insertion sort on small ranges).

3. The Mental Model of DSA

Every data structure represents a tradeoff. There is no "best" structure — only the best for a specific access pattern.

The four universal operations:
  • INSERT (add a new element): hash O(1) · array end O(1) amortized · array front O(n) · balanced tree O(log n) · linked list O(1) given a reference to the node.
  • SEARCH (find an element): hash O(1) · sorted array O(log n) · unsorted array O(n) · balanced tree O(log n) · trie O(L).
  • DELETE (remove an element): hash O(1) · linked list O(1) given a reference to the node · array middle O(n) · balanced tree O(log n) · heap top O(log n).
  • TRAVERSE (visit all elements): always Ω(n); what varies is the order (in/pre/post-order, BFS vs DFS, sorted vs random).
Figure 3.1 — Every problem boils down to insert, search, delete, traverse.

4. Arrays & Dynamic Arrays

An array is a contiguous block of memory holding fixed-size elements indexed by integer offsets. Because the address of element i is computed as base + i × sizeof(T), access is O(1). The same locality makes arrays extremely cache-friendly; they are usually the fastest structure for sequential workloads despite their humble interface.

Figure 4.1 — Contiguous memory layout enables O(1) random access.
Operation | Static Array | Dynamic Array
Access by index | O(1) | O(1)
Search (unsorted) | O(n) | O(n)
Search (sorted, binary) | O(log n) | O(log n)
Insert at end | — | O(1) amortized
Insert at index | O(n) | O(n)
Delete at index | O(n) | O(n)
Space | O(n) | O(n)

4.1 Dynamic Array Growth Strategy

Dynamic arrays (Python list, Java ArrayList, C++ std::vector) grow their capacity geometrically when full (classically by doubling; real implementations tune the exact factor), giving amortized O(1) append:

Total cost of n appends ≤ 1 + 2 + 4 + … + n < 2n ⇒ O(n) total ⇒ O(1) amortized
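
To make the growth strategy concrete, here's a minimal growable-array sketch (illustrative only; CPython, the JVM, and libstdc++ each tune their growth factors differently):

class DynArray:
    def __init__(self):
        self._n, self._cap = 0, 1
        self._buf = [None]                 # backing storage
    def append(self, x):
        if self._n == self._cap:           # full: the rare O(n) resize
            self._cap *= 2
            new_buf = [None] * self._cap
            new_buf[:self._n] = self._buf
            self._buf = new_buf
        self._buf[self._n] = x             # the common O(1) write
        self._n += 1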

5. Strings

Strings are arrays of characters but with their own algorithmic universe. Most languages make strings immutable — operations that "modify" return new strings. Use a string builder (StringBuilder / list-then-join) when concatenating in loops to avoid O(n²) behavior.
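
For instance, a loop of += concatenations can degrade to quadratic copying, while collecting parts and joining once stays linear:

# Quadratic in the worst case: each += may copy the whole prefix
s = ""
for word in ["data", "structures", "and", "algorithms"]:
    s += word + " "

# Linear: collect parts, join once
parts = []
for word in ["data", "structures", "and", "algorithms"]:
    parts.append(word + " ")
s = "".join(parts)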

5.1 Essential String Algorithms

Algorithm | Purpose | Complexity
KMP (Knuth–Morris–Pratt) | Pattern matching with prefix function | O(n + m)
Rabin–Karp | Rolling hash; finds patterns or duplicates | O(n + m) average
Z-Algorithm | String matching via Z-array | O(n + m)
Boyer–Moore | Skip characters via bad-char + good-suffix heuristics | Sublinear in practice
Manacher's Algorithm | All palindromic substrings | O(n)
Suffix Array / Tree | Substring queries | O(n log n) build
Aho–Corasick | Multi-pattern matching (e.g. malware scan) | O(n + Σm + matches)

6. Linked Lists

A linked list trades O(1) random access for O(1) insertion/deletion anywhere, given a reference. Each node holds data plus a pointer to the next node (and previous, for doubly linked).

Figure 6.1 — Nodes scattered across memory, connected by pointers.

6.1 Classic Linked List Tricks

# Reverse a singly linked list in O(n) time, O(1) space.
# Assumes nodes expose a .next attribute; returns the new head.
def reverse(head):
    prev, curr = None, head
    while curr:
        nxt = curr.next       # save the forward link before overwriting it
        curr.next = prev      # flip the pointer
        prev = curr
        curr = nxt
    return prev

7. Stacks — LIFO

Last-In-First-Out. Imagine a stack of plates: you push onto and pop from the top. Used in: function call stacks, undo systems, expression parsing, DFS, backtracking.

Figure 7.1 — A stack visualized.

7.1 Classic Stack Patterns

Balanced parentheses, next-greater-element (the monotonic stack of §23), min-stack, and postfix evaluation all reduce to disciplined push/pop; the matcher below is the canonical warm-up.
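
A minimal sketch of the matcher:

def is_balanced(s):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)                       # opener: push
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                       # closer must match the top
    return not stack                               # leftovers mean unbalanced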

8. Queues & Deques — FIFO

First-In-First-Out. Used in: BFS, task scheduling, message buffers, OS kernels.

A deque (double-ended queue) allows push/pop from both ends in O(1). A priority queue orders by priority instead of arrival time (usually backed by a heap).
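
In Python, collections.deque provides the O(1) both-ends behavior directly:

from collections import deque

d = deque([2, 3])
d.appendleft(1)    # O(1) push at the front → deque([1, 2, 3])
d.append(4)        # O(1) push at the back  → deque([1, 2, 3, 4])
d.popleft()        # O(1) pop from the front → 1
d.pop()            # O(1) pop from the back  → 4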

Variant | Push | Pop | Use Case
Queue (FIFO) | O(1) | O(1) | BFS, level-order, scheduling
Deque | O(1) both ends | O(1) both ends | Sliding window maximum
Priority Queue | O(log n) | O(log n) | Dijkstra, A*, top-K
Circular Queue | O(1) | O(1) | Ring buffers, audio I/O

9. Hash Tables — The Workhorse

Hash tables map keys to values in O(1) average time by transforming keys into array indices through a hash function. They are arguably the most important data structure in everyday engineering — they underpin caches, sets, dictionaries, symbol tables, deduplication, and indexing.

Figure 9.1 — Two keys hash to the same bucket; chaining stores both in a list.

9.1 Collision Resolution

Strategy | How it works | Pros / Cons
Separate Chaining | Each bucket = list/tree of entries | Simple; degrades to O(n) with bad hash
Linear Probing | On collision, scan i+1, i+2, … | Cache-friendly; clustering
Quadratic Probing | Scan i+1², i+2², … | Reduces clustering
Double Hashing | Step = second hash function | Excellent distribution
Robin Hood Hashing | Minimize displacement variance | Used in Rust's HashMap
Cuckoo Hashing | Each key has 2 candidate slots | O(1) worst-case lookup

9.2 Load Factor & Resizing

When load factor (n / capacity) exceeds a threshold (typically 0.75), the table doubles and re-hashes everything. This is O(n) but amortized O(1) per insert.
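
A toy separate-chaining map with the 0.75 threshold, to show the mechanics (a sketch, not a production table):

class ChainedMap:
    def __init__(self, cap=8):
        self.cap, self.n = cap, 0
        self.buckets = [[] for _ in range(cap)]
    def put(self, key, val):
        if self.n / self.cap > 0.75:           # load factor check
            self._resize()
        b = self.buckets[hash(key) % self.cap]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, val)              # overwrite existing key
                return
        b.append((key, val))
        self.n += 1
    def get(self, key, default=None):
        for k, v in self.buckets[hash(key) % self.cap]:
            if k == key:
                return v
        return default
    def _resize(self):                         # O(n): double, then re-hash every entry
        entries = [e for b in self.buckets for e in b]
        self.cap *= 2
        self.buckets = [[] for _ in range(self.cap)]
        for k, v in entries:
            self.buckets[hash(k) % self.cap].append((k, v))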

Hash Pitfalls
  • Never mutate a key after inserting — it loses its bucket.
  • Custom classes must implement equals() AND hashCode() consistently.
  • Adversarial inputs can cause hash collisions to attack web servers (HashDoS) — modern languages randomize hash seeds.

10. Trees

A tree is a connected, acyclic graph with a designated root. Each node has 0 or more children. Trees model hierarchies (file systems, DOM), enable fast search (BSTs), and back storage engines (B-Trees).

Figure 10.1 — A BST where every left subtree contains smaller values.

10.1 Tree Traversals

Traversal | Order | Use
Pre-order | Root → Left → Right | Clone tree, prefix expressions
In-order | Left → Root → Right | Sorted output on BST
Post-order | Left → Right → Root | Delete tree, postfix expressions
Level-order (BFS) | Layer by layer | Shortest path in unweighted tree
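
A minimal node type plus in-order traversal, which yields sorted output on a BST (the other depth-first orders just move the "visit" line):

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node, out):
    if node:
        inorder(node.left, out)    # Left
        out.append(node.val)       # Root: on a BST this lands in sorted order
        inorder(node.right, out)   # Right

root = TreeNode(50, TreeNode(30, TreeNode(20), TreeNode(40)), TreeNode(70))
vals = []
inorder(root, vals)                # → [20, 30, 40, 50, 70]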

10.2 Balanced Trees

Tree | Balance Invariant | Used By
AVL Tree | Subtree heights differ by ≤ 1 | In-memory database indexes
Red-Black Tree | Black-height invariants; ≤ 2× imbalance | Java TreeMap, Linux CFS scheduler
Splay Tree | Recently accessed nodes near root | Caching workloads
B-Tree / B+Tree | Many keys per node (high fan-out) | Postgres, MySQL, file systems
Treap | BST on keys + heap on random priorities | Randomized balance
Insight The reason databases use B-Trees (not BSTs) is disk I/O. Each B-Tree node fills a disk page (4–16 KB), so a tree of depth 4 indexes billions of rows in 4 disk reads.

11. Heaps & Priority Queues

A binary heap is a complete binary tree satisfying the heap property: in a min-heap, every parent ≤ its children; in a max-heap, every parent ≥ its children. Stored as an array; for index i, the children sit at 2i+1 and 2i+2, and the parent at ⌊(i−1)/2⌋.

Figure 11.1 — Tree shape on top; identical array representation below.
Operation | Complexity
Peek (min/max) | O(1)
Insert | O(log n)
Extract | O(log n)
Build heap from array | O(n)
Heapsort | O(n log n)

11.1 When to Use a Heap

Reach for a heap whenever you repeatedly need the current min or max without keeping everything sorted: top-K queries, streaming medians (two heaps), task scheduling, Dijkstra and Prim, merging K sorted lists.
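
Top-K with Python's heapq (a min-heap over a plain list) is the canonical small example:

import heapq

def top_k(nums, k):
    heap = []
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)    # evict the smallest; the top k survive
    return sorted(heap, reverse=True)

top_k([5, 1, 9, 3, 7], 2)          # → [9, 7]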

12. Tries (Prefix Trees)

A trie stores strings character-by-character in a tree, sharing common prefixes. Lookup, insert, and prefix queries run in O(L), independent of the number of stored strings.

Figure 12.1 — A trie storing "cat", "car", "cart", and "dog"; sharing prefixes saves space and accelerates queries.

Use Tries For: autocomplete, spell-check, IP routing (radix trie), DNA k-mers, Aho–Corasick multi-pattern matching.
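
A dictionary-based trie sketch with O(L) insert and prefix lookup:

class TrieNode:
    def __init__(self):
        self.children = {}          # char → TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()
    def insert(self, word):         # O(L)
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    def starts_with(self, prefix):  # O(L), independent of how many words are stored
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True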

13. Graphs

A graph G = (V, E) is a set of vertices V connected by edges E. Graphs are the most expressive structure in CS — almost any real-world entity-relationship problem reduces to one. Examples: social networks, road maps, dependency resolution, web crawl, neural networks, blockchain.

Adjacency List

0: [1, 2]
1: [0, 3]
2: [0, 3]
3: [1, 2]

Space O(V+E). Best for sparse graphs.

Adjacency Matrix

   0 1 2 3
0 [0 1 1 0]
1 [1 0 0 1]
2 [1 0 0 1]
3 [0 1 1 0]

Space O(V²). O(1) edge lookup. Best for dense graphs.

Figure 13.1 — Vertices, directed edges, and weights.

14. Union–Find (Disjoint Set Union)

A Union–Find structure maintains a collection of disjoint sets supporting find(x) (which set?) and union(x,y) (merge sets). With path compression + union by rank, every operation runs in nearly O(1) — formally O(α(n)), the inverse Ackermann function.

Used For: Kruskal's MST, network connectivity, dynamic graph problems, "is this a cycle?" checks.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank   = [0]*n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb: return False
        if self.rank[ra] < self.rank[rb]: ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]: self.rank[ra] += 1
        return True
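
A quick behavior check:

dsu = DSU(5)
dsu.union(0, 1)               # True: merged two components
dsu.union(1, 2)               # True
dsu.union(0, 2)               # False: already connected, so this edge closes a cycle
dsu.find(0) == dsu.find(2)    # True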

15. Segment Tree & Fenwick Tree (BIT)

Range query structures. A segment tree answers range-sum, range-min, range-gcd, etc. in O(log n) with O(log n) point or range updates. A Fenwick tree (BIT) handles prefix sums with simpler code but only invertible operations.

Operation | Segment Tree | Fenwick Tree
Build | O(n) | O(n)
Point Update | O(log n) | O(log n)
Range Query | O(log n) | O(log n)
Range Update (lazy) | O(log n) | O(log n), additive only
Implementation | ~80 LOC | ~15 LOC
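
The ~15-LOC claim is real. A Fenwick tree for prefix sums (1-based indices; i & -i isolates the lowest set bit):

class Fenwick:
    def __init__(self, n):
        self.t = [0] * (n + 1)
    def update(self, i, delta):    # point update: a[i] += delta
        while i < len(self.t):
            self.t[i] += delta
            i += i & -i
    def query(self, i):            # prefix sum a[1..i]
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

# Range sum a[l..r] = query(r) - query(l - 1)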

16. Sorting Algorithms

Algorithm | Best | Average | Worst | Space | Stable | In-place
Bubble | Ω(n) | Θ(n²) | O(n²) | O(1) | yes | yes
Selection | Ω(n²) | Θ(n²) | O(n²) | O(1) | no | yes
Insertion | Ω(n) | Θ(n²) | O(n²) | O(1) | yes | yes
Merge | Ω(n log n) | Θ(n log n) | O(n log n) | O(n) | yes | no
Quicksort | Ω(n log n) | Θ(n log n) | O(n²) | O(log n) | no | yes
Heapsort | Ω(n log n) | Θ(n log n) | O(n log n) | O(1) | no | yes
Counting | Ω(n+k) | Θ(n+k) | O(n+k) | O(k) | yes | no
Radix | Ω(nk) | Θ(nk) | O(nk) | O(n+k) | yes | no
Timsort (Python/Java) | Ω(n) | Θ(n log n) | O(n log n) | O(n) | yes | no
Introsort (C++) | Ω(n log n) | Θ(n log n) | O(n log n) | O(log n) | no | yes

16.1 The Comparison Lower Bound

Any comparison-based sort requires Ω(n log n) comparisons in the worst case: its decision tree must distinguish all n! input orders, so its depth is at least log₂(n!) = Θ(n log n).

This is why counting/radix/bucket sort can beat n log n — they exploit the structure of the keys (integers in a bounded range), not arbitrary comparison.
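
A counting sort sketch for integers in [0, k) makes the point:

def counting_sort(arr, k):
    count = [0] * k
    for x in arr:                  # O(n): tally each key
        count[x] += 1
    out = []
    for v in range(k):             # O(k): emit keys in order
        out.extend([v] * count[v])
    return out                     # O(n + k) total, with no comparisons at all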

16.2 Quicksort — The Industrial Workhorse

def quicksort(arr, lo, hi):
    # Sorts arr[lo..hi] in place; first call: quicksort(arr, 0, len(arr) - 1)
    if lo < hi:
        pivot = partition(arr, lo, hi)
        quicksort(arr, lo, pivot - 1)
        quicksort(arr, pivot + 1, hi)

def partition(arr, lo, hi):
    # Lomuto scheme: arr[hi] is the pivot; returns its final position
    p = arr[hi]; i = lo - 1
    for j in range(lo, hi):
        if arr[j] <= p:
            i += 1; arr[i], arr[j] = arr[j], arr[i]
    arr[i+1], arr[hi] = arr[hi], arr[i+1]
    return i + 1
Choosing a Sort
  • Default: trust your standard library (Timsort, Introsort, Pdqsort) — they're hybrids tuned over decades.
  • Almost-sorted data → Insertion or Timsort (O(n) best case).
  • Bounded integers → Counting / Radix (O(n)).
  • Stability required → Merge sort or Timsort.
  • Constant memory → Heapsort.
  • Massive data, external → External merge sort.

18. Recursion & Backtracking

Recursion is a function that calls itself on smaller instances. Every recursive solution needs:

  1. Base case — the trivial input that returns directly.
  2. Recursive case — break the problem into smaller subproblems.
  3. Progress — guarantee the recursion reaches the base case.

18.1 The Recursion Tree

Figure 18.1 — The recursion tree for fib(5): naive recursion recomputes the same subproblems (fib(3) twice, fib(2) three times); memoization cuts O(2ⁿ) to O(n).

18.2 Backtracking Template

def backtrack(state):
    if is_goal(state):
        results.append(copy(state))
        return
    for choice in candidates(state):
        if is_valid(state, choice):
            apply(state, choice)        # make
            backtrack(state)
            undo(state, choice)         # unmake — the heart of backtracking

Classic backtracking problems: N-Queens, Sudoku, permutations, combinations, subset sum, word search, graph coloring.
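
The template instantiated for permutations; state is the partial permutation, and make/unmake bracket the recursive call:

def permutations(nums):
    results, state = [], []
    used = [False] * len(nums)
    def backtrack():
        if len(state) == len(nums):    # goal: every element placed
            results.append(state[:])
            return
        for i, x in enumerate(nums):
            if not used[i]:            # valid choice: element still free
                used[i] = True
                state.append(x)        # make
                backtrack()
                state.pop()            # unmake
                used[i] = False
    backtrack()
    return results                     # n! results for n distinct items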

19. Divide & Conquer

Three steps, with merge sort (sketched after the list) as the canonical example:

  1. Divide — split into smaller, independent subproblems.
  2. Conquer — solve each recursively.
  3. Combine — merge subsolutions into the final answer.
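
Merge sort shows all three steps in a few lines:

def merge_sort(a):
    if len(a) <= 1:                    # base case
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # divide + conquer
    right = merge_sort(a[mid:])
    out, i, j = [], 0, 0               # combine: merge two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]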

19.1 The Master Theorem

For T(n) = a·T(n/b) + f(n):

Case 1: f(n) = O(n^(log_b a − ε)) for some ε > 0 ⇒ T(n) = Θ(n^(log_b a))
Case 2: f(n) = Θ(n^(log_b a)) ⇒ T(n) = Θ(n^(log_b a) · log n)
Case 3: f(n) = Ω(n^(log_b a + ε)) and a·f(n/b) ≤ c·f(n) for some c < 1 ⇒ T(n) = Θ(f(n))

Example: merge sort is T(n) = 2T(n/2) + Θ(n); here a = 2, b = 2, n^(log₂ 2) = n, so Case 2 gives Θ(n log n).

Classic D&C algorithms: Merge sort, Quicksort, Karatsuba multiplication, Strassen matrix multiply, FFT, closest pair of points.

20. Dynamic Programming

DP = recursion + memoization. It solves problems with two properties:

  1. Optimal substructure — the optimal solution contains optimal solutions to subproblems.
  2. Overlapping subproblems — the same subproblem appears many times.

20.1 Top-Down vs Bottom-Up

Top-Down (Memoization)

Write the natural recursion, then cache results.

def fib(n, memo={}):
    if n < 2: return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

Bottom-Up (Tabulation)

Build a table from base cases upward.

def fib(n):
    if n < 2: return n
    a, b = 0, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

20.2 The Five Canonical DP Categories

Category | Example Problem | Recurrence Shape
1-D Linear | Climbing stairs, House Robber | dp[i] = f(dp[i-1], dp[i-2])
2-D Grid | Unique paths, Edit distance, LCS | dp[i][j] = f(dp[i-1][j], dp[i][j-1], …)
Knapsack | 0/1 Knapsack, Coin change, Subset sum | dp[i][w] = max(skip, take)
Interval | Matrix chain, Burst balloons, Palindrome partition | dp[i][j] = min/max over k
Tree / Bitmask | Tree DP, TSP via bitmask | dp[node][state] or dp[mask][i]

20.3 DP Strategy in Five Steps

  1. Define the state: what does dp[i][j] mean in plain English?
  2. Write the recurrence: how does the current state relate to smaller states?
  3. Identify base cases.
  4. Determine the iteration order (dependencies must be computed first).
  5. Optimize space: often only the previous row/column is needed. (All five steps are applied to coin change below.)
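
Coin change, worked through the five steps: the state dp[a] is the fewest coins making amount a; the recurrence is dp[a] = 1 + min(dp[a - c]) over coins c; the base case is dp[0] = 0; iteration runs amounts upward; space is already O(amount). A sketch:

def min_coins(coins, amount):
    INF = float('inf')
    dp = [0] + [INF] * amount              # base case: dp[0] = 0
    for a in range(1, amount + 1):         # smaller amounts are ready first
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] < INF else -1

min_coins([1, 5, 11], 15)                  # → 3 (5+5+5); greedy 11+1+1+1+1 would use 5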

21. Greedy Algorithms

A greedy algorithm makes the locally optimal choice at each step, hoping it leads to a globally optimal solution. It works only when the problem has the greedy-choice property + optimal substructure.

When greedy works, it's usually faster and simpler than DP. When it doesn't, it produces wrong answers silently — always prove correctness (exchange argument, matroid theory, or induction).

Problem | Greedy Choice
Activity Selection | Earliest finishing time
Huffman Coding | Merge two smallest-frequency nodes
Dijkstra's SSSP | Closest unvisited vertex
Prim's MST | Lightest crossing edge
Kruskal's MST | Lightest edge that doesn't form a cycle
Fractional Knapsack | Highest value/weight ratio
Jump Game II | Furthest reachable within current step
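
Activity selection in code, with the exchange-argument intuition in a comment:

def max_activities(intervals):
    # Earliest finisher first: it can always be exchanged into any
    # optimal solution, so the greedy choice is safe here.
    intervals.sort(key=lambda iv: iv[1])
    count, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:              # compatible with all chosen so far
            count += 1
            last_end = end
    return count

max_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)])   # → 3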

22. Graph Algorithms

22.1 Traversals: BFS & DFS

BFS — Breadth-First Search

Queue-based. Visits all vertices at distance k before k+1.

Uses: shortest path in unweighted graph, level/grid problems, web crawl.

from collections import deque
def bfs(g, src):
    # Returns hop counts from src; vertices are dequeued in distance order
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

DFS — Depth-First Search

Stack/recursion. Goes deep before backtracking.

Uses: cycle detection, topological sort, SCCs (Tarjan/Kosaraju), bridges & articulation points.

def dfs(g, u, seen):
    seen.add(u)
    for v in g[u]:
        if v not in seen:
            dfs(g, v, seen)

22.2 Shortest-Path Algorithms

Algorithm | Handles | Complexity | Use Case
BFS | Unweighted | O(V+E) | Hops in a social network
Dijkstra | Non-negative weights | O((V+E) log V) | GPS routing, network latency
Bellman–Ford | Negative weights; detects negative cycles | O(V·E) | Currency arbitrage
Floyd–Warshall | All-pairs shortest paths | O(V³) | Small dense graphs
A* search | Heuristic-guided | O(E) with a good heuristic | Pathfinding in games, robotics
Johnson's | All-pairs with negatives (no negative cycle) | O(V·E + V² log V) | Sparse graphs
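
Dijkstra with the usual lazy-deletion heap (g maps u to a list of (v, w) pairs with non-negative w):

import heapq

def dijkstra(g, src):
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                           # stale entry: u already settled
        for v, w in g.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist                                # unreachable vertices are absent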

22.3 Minimum Spanning Tree

A minimum spanning tree connects all V vertices of a connected weighted graph with V−1 edges of minimum total weight. Kruskal's algorithm sorts the edges and keeps each lightest edge that doesn't close a cycle (checked with Union–Find, §14); Prim's grows the tree from one vertex, repeatedly taking the lightest crossing edge from a heap. Both are greedy and correct by the cut property.
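
A Kruskal sketch reusing the DSU class from §14 (edges as (w, u, v) tuples over vertices 0..n−1):

def kruskal(n, edges):
    dsu = DSU(n)
    total = 0
    for w, u, v in sorted(edges):    # lightest edges first
        if dsu.union(u, v):          # True ⇔ the edge joins two components
            total += w
    return total                     # weight of the MST (or forest)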

22.4 Topological Sort

Linear ordering of a DAG such that every edge u→v has u before v. Used in: build systems (Make, Bazel), course prerequisites, dependency resolution (npm, pip).
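
Kahn's algorithm is the usual BFS-style implementation (g maps u to its successors over vertices 0..n−1):

from collections import deque

def topo_sort(g, n):
    indeg = [0] * n
    for u in g:
        for v in g[u]:
            indeg[v] += 1
    q = deque(u for u in range(n) if indeg[u] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in g.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:          # all prerequisites emitted
                q.append(v)
    return order if len(order) == n else None   # None ⇒ the graph has a cycle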

22.5 Strongly Connected Components & Network Flow

A strongly connected component is a maximal set of vertices that can all reach each other; Tarjan's and Kosaraju's algorithms find all SCCs in O(V+E), and condensing each SCC to a single node leaves a DAG. Network flow asks how much can move from source to sink through capacitated edges: max-flow equals min-cut, with Edmonds–Karp running in O(V·E²) and Dinic's in O(V²·E).

23. Fifteen coding patterns I still drill

Most interview (and plenty of contest) problems still snap to one of these shapes. Naming the shape early beats rewriting from scratch.

1. Two Pointers

Pair sum in sorted array, palindromes, remove duplicates. O(n) instead of O(n²).
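
Pair sum in a sorted array is the canonical instance:

def pair_sum_sorted(arr, target):
    i, j = 0, len(arr) - 1
    while i < j:
        s = arr[i] + arr[j]
        if s == target:
            return (i, j)
        if s < target:
            i += 1          # need a bigger sum: advance the left pointer
        else:
            j -= 1          # need a smaller sum: retreat the right pointer
    return None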

2. Sliding Window

Longest substring without repeats, max sum subarray of size K.
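
Longest substring without repeats as a window sketch:

def longest_unique(s):
    last, best, left = {}, 0, 0
    for right, ch in enumerate(s):
        if ch in last and last[ch] >= left:
            left = last[ch] + 1        # shrink the window past the previous occurrence
        last[ch] = right
        best = max(best, right - left + 1)
    return best

longest_unique("abcabcbb")             # → 3 ("abc")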

3. Fast & Slow Pointers

Cycle detection, find middle, happy number.

4. Merge Intervals

Calendar conflicts, meeting rooms, interval insertion.

5. Cyclic Sort

Find missing/duplicate numbers in 1..n range.

6. In-place Reversal of LL

Reverse sublist, reverse in groups of K.

7. BFS — Tree/Graph

Level-order, shortest unweighted path, zigzag traversal.

8. DFS — Tree/Graph

Path sum, all paths, tree diameter, islands.

9. Two Heaps

Median of a stream, sliding-window median.

10. Subsets / Backtracking

Permutations, combinations, power set, partitioning.

11. Modified Binary Search

Search rotated array, find peak, binary-search on answer.

12. Top-K Elements

K largest, K closest, top-K frequent (heap or quickselect).

13. K-way Merge

Merge K sorted lists, smallest range covering K lists.

14. Topological Sort

Course schedule, alien dictionary, build order.

15. Monotonic Stack / Deque

Next greater element, largest rectangle, sliding window max.
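
Next greater element with a decreasing stack of indices:

def next_greater(nums):
    res = [-1] * len(nums)
    stack = []                         # indices still waiting for their answer
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = x       # x is the first greater value to the right
        stack.append(i)
    return res

next_greater([2, 1, 5, 3])             # → [5, 5, -1, -1]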

24. Problem-Solving Strategy & Interview Playbook

24.1 The UMPIRE Framework

Step | Action
Understand | Restate the problem. Confirm input ranges, edge cases, output format.
Match | Match to a known pattern (sliding window? graph? DP?).
Plan | Outline the algorithm in plain English before coding.
Implement | Code cleanly. Use helper functions.
Review | Trace through with a small example.
Evaluate | Analyze time/space complexity. Discuss tradeoffs and follow-ups.

24.2 How to Approach Any New Problem

  1. Restate & clarify — what are inputs, outputs, constraints? n = 10? 10⁵? 10⁹?
  2. Brute force first — get any correct algorithm, even if O(n²) or O(2ⁿ).
  3. Identify the bottleneck — what's the slow step in brute force?
  4. Optimize with a data structure or pattern that attacks the bottleneck.
  5. Verify edge cases: empty input, one element, duplicates, negatives, overflow, max size.
  6. Test with examples: walk them through on paper or in a REPL.

24.3 Constraint-Based Algorithm Hints

Constraint on n | Likely Complexity | Suggested Approach
n ≤ 12 | O(n!) | Permutations / brute force
n ≤ 25 | O(2ⁿ) | Subsets, bitmask DP, meet-in-the-middle
n ≤ 100 | O(n⁴) | DP, Floyd–Warshall
n ≤ 500 | O(n³) | DP, matrix chain
n ≤ 5,000 | O(n²) | 2-D DP, all pairs
n ≤ 10⁶ | O(n log n) | Sort, heap, segment tree, binary search
n ≤ 10⁸ | O(n) or O(log n) | Linear scan, two pointers, math formula
n > 10⁹ | O(log n) or O(1) | Mathematical insight, closed form

24.4 Interview Communication Checklist

25. Data Structure Picker — Decision Tree

"Which data structure should I use?" What's the access pattern? Key → Value lookup? Hash Map if order doesn't matter TreeMap / BST if sorted order needed Insertion-order matters? Stack / Queue LIFO / FIFO Array / Vector random access Relationships between items? Tree hierarchy / parent-child Graph network / many-to-many Specialized? Match to access pattern: Priority access? → Heap / PQ Prefix queries? → Trie Range queries? → Segment / Fenwick Connectivity? → Union-Find Cache? → LRU (Map+List) Tip: When in doubt — start with a HashMap. Most problems either are or contain a lookup.
Figure 25.1 — Decision tree for picking a data structure.

26. Mastery Roadmap — Beginner to Expert

A 6-Month Mastery Path:
  1. Foundation (weeks 1–4): Big-O, arrays, strings, hashing, two pointers.
  2. Linear DS (weeks 5–8): linked lists, stacks, queues, deques, sliding window.
  3. Trees & Heaps (weeks 9–12): BST, heaps, recursion, backtracking.
  4. Graphs (weeks 13–16): BFS, DFS, Dijkstra, MST, topological sort, DSU.
  5. DP (weeks 17–20): 1-D, 2-D, knapsack, interval, bitmask.
  6. Advanced (weeks 21–24): segment trees, string algorithms, system design.
Pace ≈ 5 problems/week. Goal: 120+ problems with deliberate review across patterns.
Figure 26.1 — Progressive mastery path with weekly milestones.

26.1 Practice Strategy

26.2 Curated Problem Tracks

26.3 Recommended Languages

Language | Pros for DSA | Cons
Python | Concise; built-in sort, heap, dict, set | Slower; recursion limit; no real arrays of primitives
C++ | Fastest; STL is unmatched; bit operations | More verbose; manual memory considerations
Java | Strong typing; rich collections; common at FAANG | Verbose; int overflow gotchas
Go | Simple; great concurrency primitives | Smaller standard collection library
Rust | Memory-safe; modern | Borrow checker slows competitive coding

27. References & Further Reading

Narrative and diagrams in §1–§26 are my own synthesis unless noted inline; §27 is where I point people for primary texts and courses. Author: Linh Truong · LinhTruong.com.

27.1 Foundational Textbooks

27.2 Interview-Focused

27.3 Online Resources

27.4 Beyond Interviews