Concurrency vs Parallelism: The Restaurant Kitchen Guide to How Software Actually Works

There’s a quiet revolution happening in how we build software, and the best way to understand it is to imagine a busy kitchen.
One Chef, One Pot
In the early days (and still today in languages like C), programmers managed memory by hand. You’d call malloc() to grab some memory, use it, then call free() when you were done. Forget that free() call? Memory leak. Call it twice? A corrupted heap and, sooner or later, a crash. It’s like a chef who has to personally wash, track, and put away every single pan the moment they’re done with it, mid-service.
It works. It’s fast. It’s also exhausting and error-prone.
Let the Dishwasher Handle It
Then came garbage collection. Languages like Python, JavaScript, and Go said: what if we just… handled that for you? The runtime keeps track of which objects your program can still reach, notices when something can no longer be used, and reclaims it automatically.
Our chef can finally just cook. Dirty pan? Set it aside. Someone else deals with it.
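Here’s that shift in miniature, as a rough Go sketch (Go being one of the garbage-collected languages above; the numbers are just for show). The loop allocates about a gigabyte in small slices and never frees a single one, and the collector reclaims everything that’s no longer reachable. In C, every one of those allocations would need a matching free().

```go
package main

import (
	"fmt"
	"runtime"
)

// scratch lives at package level so each allocation ends up on the heap.
var scratch []byte

func main() {
	// A million "dirty pans": each new slice makes the previous one
	// unreachable, and we never clean up after ourselves.
	for i := 0; i < 1_000_000; i++ {
		scratch = make([]byte, 1024)
	}

	runtime.GC() // collection normally happens on its own; forced here so the numbers below are stable

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("allocated ~%d MB in total, only ~%d MB still live\n",
		m.TotalAlloc>>20, m.HeapAlloc>>20)
}
```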
This was the first big shift from human-managed to machine-managed complexity.
The Busy Kitchen
But modern software isn’t one chef with one pot. It’s a restaurant at peak dinner rush. Multiple things need to happen at once.
Here’s where people get confused: concurrent and parallel aren’t the same thing.
Concurrent is one chef managing three pots on a stove. Stir the risotto, check the sauce, flip the steak, back to the risotto. They’re juggling multiple tasks, switching rapidly between them. Nothing’s literally simultaneous, but everything’s making progress.
Parallel is three chefs, each at their own station, all cooking at the same time. Actual simultaneous work.
A real kitchen (like a real computer) does both. Four chefs (four CPU cores) working in parallel, each one concurrently managing their own set of tasks.
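If a sketch helps, here’s the distinction in Go (the station names are purely illustrative). With GOMAXPROCS set to 1 there’s a single chef: the goroutines take turns, which is concurrency. Set it to the number of cores and the same goroutines can genuinely run at the same time, which is parallelism.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// cook is one "pot": a small task with several steps.
func cook(station string, wg *sync.WaitGroup) {
	defer wg.Done()
	for step := 1; step <= 3; step++ {
		fmt.Printf("%s: step %d\n", station, step)
	}
}

func main() {
	// One chef: only one goroutine executes at any instant, and the Go
	// scheduler switches between them. Swap in runtime.NumCPU() to give
	// every pot its own chef.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for _, station := range []string{"risotto", "sauce", "steak"} {
		wg.Add(1)
		go cook(station, &wg)
	}
	wg.Wait()
}
```

Either way the program’s logic doesn’t change; only how much genuinely happens at once does.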
The Expediter Doesn’t Know Your Recipe
Here’s something beautiful about how this works: the scheduler (the system that decides which task runs when) doesn’t know or care what’s inside each task.
Think of the expediter calling orders. They see: “Table 12 is waiting, priority high, needs attention.” They don’t know if it’s a burger or a bouillabaisse. They just know it needs to move.
The scheduler sees process IDs, states (ready, running, waiting), and priorities. The actual work inside? Black box. This separation is what lets the system scale. The coordination layer stays simple even as the work gets complex.
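A toy dispatcher makes the point. This is a deliberately simplified sketch, not how a real operating system scheduler works; the Task fields and the pick-highest-priority policy are invented for illustration. The part that matters is that Run is a black box.

```go
package main

import "fmt"

// Task is the ticket as the dispatcher sees it: an ID, a state, a priority,
// and an opaque function it never looks inside.
type Task struct {
	ID       int
	State    string // "ready", "running", "waiting"
	Priority int
	Run      func() // the black box
}

// dispatch picks the highest-priority ready ticket and runs it.
// It never inspects what Run actually does.
func dispatch(tasks []*Task) {
	var next *Task
	for _, t := range tasks {
		if t.State == "ready" && (next == nil || t.Priority > next.Priority) {
			next = t
		}
	}
	if next == nil {
		return // nothing ready
	}
	next.State = "running"
	next.Run() // burger or bouillabaisse: the dispatcher neither knows nor cares
	next.State = "waiting"
}

func main() {
	tickets := []*Task{
		{ID: 7, State: "ready", Priority: 3, Run: func() { fmt.Println("table 7: burger out") }},
		{ID: 12, State: "ready", Priority: 9, Run: func() { fmt.Println("table 12: bouillabaisse out") }},
	}
	dispatch(tickets) // table 12 goes first: higher priority, contents unknown
}
```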
No Reaching Into Someone Else’s Station
Traditional multi-threaded programming is chaos. Multiple threads sharing the same memory, stepping on each other’s variables, requiring elaborate locking mechanisms to prevent race conditions. It’s like four chefs all reaching into the same pot, hoping they don’t collide.
Erlang (a language built for telecom switches that need to run for years without crashing) took a different approach: message passing.
Each chef has their own station. Their own workspace. Nobody reaches into anyone else’s space. When the grill chef finishes the steak, they don’t walk it over and drop it on someone’s cutting board. They send it (pass a message) to the plating station. Clean handoff. No collision possible.
The sauté chef needs prepped vegetables? A message arrives with exactly what they need. The dessert station needs to know when to fire? They’re listening for that message.
Millions of tiny processes, each isolated, coordinating through messages. No shared memory. No locks. No data races.
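Erlang’s processes and mailboxes are their own thing, but the no-shared-memory idea is easy to sketch with Go channels (Go already being part of this story); the stations here are invented for the example. Each goroutine owns its workspace, and the only way work moves between them is as a message on a channel.

```go
package main

import "fmt"

// grill has its own workspace. Finished items leave only as messages on the
// plating channel; nothing is shared.
func grill(orders <-chan string, plating chan<- string) {
	for order := range orders {
		plating <- "grilled " + order
	}
	close(plating) // no more orders means no more plates coming
}

// plate listens for whatever the grill sends and never reaches into the
// grill's station.
func plate(plating <-chan string, done chan<- struct{}) {
	for item := range plating {
		fmt.Println("plated:", item)
	}
	close(done)
}

func main() {
	orders := make(chan string)
	plating := make(chan string)
	done := make(chan struct{})

	go grill(orders, plating)
	go plate(plating, done)

	for _, o := range []string{"steak", "burger"} {
		orders <- o // a clean handoff: send the work, don't reach into the station
	}
	close(orders)
	<-done // wait for the plating station to finish
}
```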
The Progression
Look at the arc:
- malloc: Humans micromanaging every detail
- Garbage collection: Automated memory management
- Message passing: Automated concurrency and fault tolerance
Each step removes tedious complexity from human shoulders and lets the machine handle it. Not because humans can’t manage it, but because they shouldn’t have to.
Why This Matters Now
We’re entering an era of AI-assisted and AI-orchestrated development. Systems where humans define architecture and outcomes while machines handle implementation details.
This isn’t new. It’s the same trajectory we’ve been on since we stopped managing memory by hand. The question was never “human vs. machine.” It was always “what should each be responsible for?”
The best systems have always been the ones that let humans think about what needs to happen while machines figure out how to make it happen reliably, concurrently, at scale.
The kitchen runs smoothest when the chef can focus on the food.
Next time your computer seems to be doing twelve things at once, picture that kitchen: parallel stations, concurrent task-juggling, an expediter who only knows tickets not recipes, and messages flying between stations. Nobody reaching into anyone else’s space.