- The Routing Intent by Leonardo Furtado
Declarative Infrastructure in Network Engineering
Why It’s the Foundation of Modern, Scalable, and Reliable Networks

Still logging into devices to make quick changes? That doesn’t scale.
In this article, we explore how declarative infrastructure is transforming network engineering, from hyperscale FAANG environments to everyday enterprises, by replacing manual configuration and fragile automation with intent-driven, self-correcting, codified systems.
Learn how to reduce drift, gain observability, and deploy networks like modern software teams ship code.
The Imperative Networking Mindset:
How It Served Us, How It Fails Us, and Why It’s Time to Let Go
Before we had pipelines, reconciliation loops, and intent compilers, we had people.
People who logged into routers, typed in commands, and made the network work, command by command, line by line.
This was imperative networking, and for decades, it got the job done.
What Is Imperative Networking?
Imperative networking is based on telling the device exactly how to achieve a desired outcome, step by step.
Example:
Configure an interface
Assign an IP address
Define BGP neighbors
Apply prefix-lists
Commit changes
There’s no abstraction. No separation of intent. The engineer is both the planner and the executor. Every change depends on the order of operations and the engineer’s ability to get it right every time.
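To make that order-dependence concrete, here is a purely illustrative Python sketch. The device model and command names are invented for this example; the point is that an imperative workflow encodes hidden ordering assumptions the engineer must get right every time:

```python
# Toy model of an imperative workflow: each "command" mutates device
# state directly, and correctness depends on running every step in order.
# All names here are invented for illustration.

def apply_commands(device: dict, commands: list) -> dict:
    """Apply CLI-style commands one by one, exactly as typed."""
    for cmd, arg in commands:
        if cmd == "interface":
            device["interface"] = arg            # enter interface context
        elif cmd == "ip_address":
            # Imperative pitfall: this silently depends on the previous
            # "interface" command having been run first.
            device.setdefault("addresses", {})[device["interface"]] = arg
        elif cmd == "bgp_neighbor":
            device.setdefault("neighbors", []).append(arg)
    return device

change = [
    ("interface", "Gi0/1"),
    ("ip_address", "192.0.2.1/30"),
    ("bgp_neighbor", "192.0.2.2"),
]
device = apply_commands({}, change)
print(device["addresses"])  # {'Gi0/1': '192.0.2.1/30'}

# Skip the first step and the same change now fails: the "how"
# carries ordering assumptions a human must satisfy every time.
try:
    apply_commands({}, change[1:])
except KeyError as err:
    print(f"missing step broke the change: {err}")
```

Nothing in the command list itself says that `interface` must precede `ip_address`; that knowledge lives in the engineer's head.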
Why It Worked (Until It Didn’t)
Imperative workflows made sense in simpler times:
You had fewer devices
Network designs were static
Outages were recoverable by humans, albeit at great operational and financial expense
Tribal knowledge was passed down in config snippets and runbooks
Changes were infrequent, mostly performed during maintenance windows
Back then, knowing the syntax for five different vendors made you a senior engineer. Automation meant copying and pasting from Notepad++, or, more recently, Visual Studio Code.
And even in small environments, it still broke things. A lot. But we never fixed the way we worked, only the outages.
Why It Breaks Down Everywhere
However, the truth is that imperative models don’t scale. Not at FAANG. Not in banks. Not even in a 20-branch enterprise.
Here’s why:
1. It’s Error-Prone by Design
Each change relies on human memory, perfect syntax, and correct ordering.
One missed line? One command applied to the wrong context? One engineer forgetting to save?
Welcome to:
Broken BGP sessions
Blackholed traffic
Asymmetric routing
Prolonged outages
Automation can reduce mistakes, but imperative automation still suffers from fragility. You’re just running brittle commands faster.
2. It Encourages Tribal Knowledge, Not Shared Understanding
In imperative shops, the person who knows how to fix things becomes “indispensable.”
But that also means:
No one else can safely make changes
Troubleshooting is slow when that person is offline
Documentation gets out of date, or was never written
The result? People, not systems, become the single source of truth. That’s operational risk, plain and simple.
3. It’s Unobservable and Unverifiable
With imperative workflows:
There’s no formal record of what “should” be configured
Drift is invisible until something breaks
You don’t know what changed unless you run diffs manually, if you even can
Auditing requires reverse-engineering command logs or device outputs
This makes compliance, incident response, and root cause analysis painfully slow.
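By contrast, even a minimal declared source of truth makes drift a mechanical question rather than a forensic one. A hedged sketch (the config keys and values are made up for illustration):

```python
# Minimal drift detection: diff a declared "should be" state against
# what a device actually reports. The configs here are invented examples.

def detect_drift(desired: dict, actual: dict) -> dict:
    """Return {key: (desired_value, actual_value)} for every mismatch."""
    drift = {}
    for key in desired.keys() | actual.keys():
        want, have = desired.get(key), actual.get(key)
        if want != have:
            drift[key] = (want, have)
    return drift

desired = {"Gi0/1.ip": "192.0.2.1/30", "Gi0/1.state": "up", "bgp.asn": 65001}
actual  = {"Gi0/1.ip": "192.0.2.1/30", "Gi0/1.state": "down", "bgp.asn": 65001}

for key, (want, have) in detect_drift(desired, actual).items():
    print(f"DRIFT {key}: declared={want!r} observed={have!r}")
# DRIFT Gi0/1.state: declared='up' observed='down'
```

With no declared `desired` to diff against, the same question requires a human to remember what the device was supposed to look like.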
4. It Slows Down the Business
Every change has a human in the loop. That means:
Changes take hours instead of minutes
Approval workflows are based on fear, not risk modeling
Engineers are stuck doing procedural tasks instead of solving meaningful problems
The larger your team or footprint becomes, the more time is wasted on toil, and the harder it becomes to move fast without breaking things.
5. Even Small Enterprises Pay the Price
This isn’t just a hyperscaler problem. A regional retail chain with:
30 stores
3 firewalls per site
2 WAN providers
10 config parameters per device
Rough math: 30 stores × 3 firewalls × 10 parameters is 900 configuration items, and closer to 1,800 once you account for both WAN paths, plus several thousand configuration lines on average, before counting growth, failovers, or change requests.
Without a declarative model:
Each site gets “close enough” configs
Emergency fixes drift over time
Tech debt accumulates until even basic changes carry risk
And eventually, the business loses confidence in the network team’s ability to deliver safely
The Real Cost of Imperative Networking
It’s not just about syntax. It’s about outcomes:
Lost time
Lost confidence
Lost reliability
Lost team scalability
And in many cases, lost engineers, burned out by pager fatigue, config sprawl, and hero-mode operations.
It’s Time to Move On Without Guilt
Let’s be clear:
Imperative networking got us here. It was the best tool we had for a long time.
But we’ve outgrown it.
Declarative infrastructure doesn’t erase your skill. It amplifies it.
It lets you:
Scale your expertise across systems
Turn tribal knowledge into repeatable code
Replace manual steps with validated outcomes
Focus on architecture, reliability, and velocity
Your job isn’t to SSH in and configure; it’s to define, validate, observe, and evolve.
Remarkably, the imperative model is still how the overwhelming majority of companies operate today, and it is all most network engineers know about maintaining computer networks.
This means that acquiring the knowledge and skills discussed in this post, starting now, will provide you with a significant career advantage. Keep reading and get ready!
Why Most "Network Automation" Today Is Still Imperative
Let’s be honest: even though many teams use tools like Ansible, Python scripts, or Rundeck jobs, most of what’s called “automation” today is still imperative in nature.
Why? Because we’re still telling devices how to do something, line by line, but just faster.
Imperative Automation in Disguise
Here's what that often looks like:
An Ansible playbook that defines a sequence of tasks: configure interface, set IP, apply ACLs, all that in strict order.
A Python script that SSHs into devices, runs CLI commands, parses output, and pushes configs.
A Rundeck job that executes shell commands or playbooks on a schedule.
Even though these methods save time and reduce typos, they’re still based on the same old imperative approach:
“You must do X, Y, Z in this order, and I’ll tell you how.”
The Limitations Remain
No true intent modeling: the system doesn’t understand what you're trying to achieve.
No reconciliation: if something drifts, it stays broken until someone notices.
No observability: configs are pushed, not validated in context.
No rollback safety: failures can leave devices in inconsistent states.
It’s like faster CLI. But it still smells like CLI!
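The rollback problem in particular is easy to see in miniature. Here is a toy sketch (task names invented, no real devices involved) of an imperative push that fails halfway and leaves the device half-configured:

```python
# Toy illustration of "no rollback safety": an imperative runner applies
# tasks in order and stops at the first failure, leaving everything the
# earlier tasks changed in place. Task names are invented for this example.

def run_tasks(device: dict, tasks: list) -> dict:
    for name, action in tasks:
        try:
            action(device)
        except Exception as err:
            # The push aborts, but nothing undoes the earlier tasks:
            # the device is now in a state nobody declared or intended.
            print(f"task {name!r} failed: {err}; device left half-configured")
            break
    return device

def set_ip(dev): dev["ip"] = "192.0.2.1/30"
def apply_acl(dev): raise RuntimeError("ACL rejected by device")
def enable_bgp(dev): dev["bgp"] = "up"

state = run_tasks({}, [("set_ip", set_ip),
                       ("apply_acl", apply_acl),
                       ("enable_bgp", enable_bgp)])
print(state)  # {'ip': '192.0.2.1/30'} — IP set, ACL missing, BGP never reached
```

This is exactly the state an imperative playbook or script can leave behind: neither the old config nor the new one.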
What’s the Alternative?
Declarative systems flip the model:
“This is the desired state, you figure out how to get there.”
The system becomes the executor, validator, and reconciler.
You focus on what you want the network to do, not how to type it in.
From CLI to Intent: A Paradigm Shift in Network Engineering
For decades, network engineers have worked in an imperative world. Need to bring up a BGP session? You configure it line by line:
Set the local AS
Configure the neighbor
Attach a route-map
Activate the session per address-family
Commit changes, test, validate manually
As we discussed above, this hands-on, procedural approach defined network engineering from the 90s well into the 2010s. Even when automation tools emerged (like Ansible, Netmiko, or Nornir), most followed the same imperative pattern: “Here’s what to do, and in what order.”
But here’s the problem, again:
Imperative automation doesn’t scale beyond a certain point. It just breaks slower.
In hyperscale environments, where networks consist of hundreds of thousands of devices and millions of configuration elements, this model collapses under its own complexity. The sheer scale introduces challenges like:
Configuration inconsistencies across regions and data centers
Exponential growth in troubleshooting complexity
Change management that becomes humanly impossible to track
When dealing with such massive infrastructure, even a minor manual error can lead to significant service disruptions. Traditional imperative approaches simply can't provide the reliability and predictability needed at this scale.
However, as I mentioned earlier, this issue affects all types of organizations, regardless of their size and scaling requirements. All organizations can and will benefit from moving away from this imperative model.
Enter Declarative Infrastructure
Declarative infrastructure represents a foundational shift in how networks are built and managed.
Instead of telling the network how to achieve a state, you tell it what the desired state is and let the system figure out the how.
For example:
❌ Imperative:
“Set interface Gi0/1 to up, assign IP, set description, configure BGP peer, apply prefix-list…”
✅ Declarative:
“Interface Gi0/1 should be up, have this IP, peer with this AS, and accept these prefixes.”
The underlying system compares this intent to the current device state, calculates the delta, and applies only what’s needed to reconcile reality with your desired outcome.
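That compare-and-reconcile step can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual implementation; the state keys and values are invented:

```python
# Sketch of a declarative reconciler: given desired intent and observed
# state, compute the delta and apply only what is needed to converge.
# Keys and values are illustrative, not any real controller's schema.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the minimal set of changes needed to reach the desired state."""
    changes = []
    for key, want in desired.items():
        if actual.get(key) != want:
            changes.append(f"set {key} = {want!r}")
            actual[key] = want          # apply only the delta
    for key in set(actual) - set(desired):
        changes.append(f"remove {key}")
        del actual[key]                 # prune config nobody declared
    return changes

intent  = {"Gi0/1.state": "up", "Gi0/1.ip": "192.0.2.1/30", "bgp.peer": "65002"}
running = {"Gi0/1.state": "down", "Gi0/1.ip": "192.0.2.1/30", "stale.acl": "permit-any"}

print(reconcile(intent, running))
# Once converged, running it again is a no-op (idempotence):
print(reconcile(intent, running))  # []
```

Two properties fall out of this model for free: only the delta is touched, and re-running the reconciler is safe, which is precisely what imperative command sequences cannot guarantee.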
