
Windows Filtering Platform: The Kernel-Mode Firewall You Don't See

The Windows Filtering Platform is the kernel-mode engine under wf.msc, IPsec, WinNAT, the Hyper-V vSwitch, and every modern Windows EDR.


Open wf.msc. Right-click "Inbound Rules," click "New Rule," fill in the form, click OK. You think you just configured a firewall. What you actually did was register one filter, inside one sublayer, at one of roughly sixty filtering layers in the kernel-mode classification path of a platform you have never named. The same platform is also running IPsec, container networking, Microsoft Defender for Endpoint's network protection, and every third-party EDR's network-telemetry pipeline on the Windows host you are using right now.

1. You Just Clicked OK on Sixty Filtering Layers

The firewall UI is the visible one percent of WFP. Almost every modern Windows network-security feature is a configuration of the same engine.

That is the central claim of this article, and it is the kind of statement that sounds like marketing until you trace the actual wires. Trace them once and you stop seeing "Windows Defender Firewall" and "IPsec" and "Windows containers" as separate products. They are all clients of the same kernel/user-mode service, configuring the same filter engine, arbitrated by the same Base Filtering Engine, classified across the same approximately sixty FWPM_LAYER_* identifiers [1].

Windows Filtering Platform (WFP)

Microsoft's cross-mode network-traffic filtering service introduced in Windows Vista and Windows Server 2008. WFP "is designed to replace previous packet filtering technologies such as Transport Driver Interface (TDI) filters, Network Driver Interface Specification (NDIS) filters, and Winsock Layered Service Providers (LSP)" [2]. The platform has five components: the Filter Engine, the Base Filtering Engine, a set of kernel-mode shims, callout drivers, and the management API [3].

Base Filtering Engine (BFE)

A Windows service named bfe that, in Microsoft's own words, "controls the operation of the Windows Filtering Platform" and "plumbs configuration settings to other modules in the system. For example, IPsec negotiation policies go to IKE/AuthIP keying modules, filters go to the filter engine" [3]. The BFE is not the Windows Firewall. The Windows Firewall is a separate service (MpsSvc) that talks to the BFE.

The naming is the first thing that trips readers. There is a service called BFE and a service called MpsSvc. They live in different rows of Get-Service output. They have different binary backings. The dependency arrow runs one way: MpsSvc requires BFE, never the other direction. That asymmetry, which seems pedantic, turns out to be load-bearing for the rest of the story. WFP is the platform. The firewall is a tenant.

The firewall UI is the visible one percent of WFP. Almost every modern Windows network-security feature -- Windows Defender Firewall with Advanced Security, Windows IPsec, WinNAT and container networking, the Hyper-V Extensible Switch, Microsoft Defender for Endpoint Network Protection, every third-party EDR with a network filter -- is a configuration of the same engine [4].

If WFP is the engine, what was there before it? Why did Microsoft need to build a platform when Windows XP SP2 had already shipped a firewall?

2. Before WFP -- An Internet on Fire

April 2004. Sasser is propagating through the LSASS RPC interface on port 445, infecting unpatched Windows machines within minutes of their first cable plug. Four months later Microsoft ships Windows XP SP2, with the Internet Connection Firewall rebranded as "Windows Firewall" and turned on by default for the first time [5]. Wikipedia notes that "the ongoing prevalence of these worms through 2004 resulted in unpatched machines being infected within a matter of minutes," and that Microsoft "switched it on by default since Windows XP SP2." XP SP2 reached general availability on August 25, 2004 [5]. That fixed the worm problem. It did not fix the plumbing problem.

The plumbing problem was that third-party security vendors were already hooking the Windows network stack at four different, mutually incompatible places, none of which arbitrated with the others. ZoneAlarm, Norton Internet Security, McAfee, Kerio, Check Point, BlackICE, and a dozen others were shipping kernel drivers that bolted onto Windows wherever they could find a callable surface [5][4]. They picked four families.

Network Driver Interface Specification (NDIS) intermediate drivers. NDIS 5.x exposed a profile called the intermediate driver that sat below the protocol stack and above the miniport. A vendor could install a driver that saw every Ethernet frame on the way up and every IP packet on the way down. The price was complexity: NDIS intermediate drivers had to participate in the entire NDIS binding state machine, and Microsoft's own documentation later admitted that the model was painful enough that the platform team replaced it with the much simpler NDIS Lightweight Filter (LWF) in NDIS 6.0 [6].

Filter-hook drivers on \Device\Ipfilterdriver. The IP filter driver exposed a single IOCTL, IOCTL_PF_SET_EXTENSION_POINTER, that registered a single callback function the kernel would invoke on every received or transmitted IP packet [7]. There was one callback pointer per machine. IPv4 only. Network layer only. No documented contract for what happened when a second vendor registered.

Winsock Layered Service Providers (LSPs). A user-mode shim chained into every Winsock application, in process. LSPs had access to per-application context, but their cost was paid in blast radius: Microsoft's own categorisation guide warned that "certain system critical processes such as winlogon and lsass create sockets" and that "a number of cases have also been documented where buggy LSPs can cause lsass.exe to crash. If lsass crashes, the system forces a shutdown" [8].

Winsock Layered Service Provider (LSP)

A user-mode DLL that chains into the Winsock service-provider stack of every process that opens a socket. LSPs were the Windows mechanism for content inspection and per-application network rules before Vista. They are still installable, but Microsoft's documentation now categorises which processes must not load them because of the lsass-crash failure mode [8].

TDI filter drivers. The Transport Driver Interface, the legacy kernel interface above TCP/IP, supported a filter-driver pattern that preserved application identity and could veto connections at the transport. It was the cleanest of the four options. It also stopped being a viable target the moment Microsoft deprecated TDI in Vista: "The TDI feature is deprecated and will be removed in future versions of Microsoft Windows. Depending on how you use TDI, use either the Winsock Kernel (WSK) or Windows Filtering Platform (WFP)" [9].

Four hooks, four failure modes, no arbitration between any of them. In May 2006 Madhurima Pawar and Eric Stenson of Windows Networking walked the WinHEC audience through one number that captured the consequence: firewall and antivirus conflicts accounted for 12 percent of all Windows operating-system crashes [10].

"Reduces firewall and anti-virus crashes -- 12% of all OS crashes." -- Madhurima Pawar and Eric Stenson, WinHEC 2006 [10]

That is the design motivation for WFP in twelve words. The XP-era hook zoo was not a security architecture; it was a steady source of bluescreens. Microsoft's documentation reads, looking back at the era from Vista: "Starting in Windows Server 2008 and Windows Vista, the firewall hook and the filter hook drivers are not available; applications that were using these drivers should use WFP instead" [2]. As Forshaw later summarised it, "these firewalls were implemented by hooking into Network Driver Interface Specification (NDIS) drivers or implementing user-mode Winsock Service Providers but this was complex and error prone" [4].

[Figure: The four pre-WFP extensibility points and where they sat in the Windows network stack]

So why didn't Microsoft just fix the hooks? Why a whole new platform?

3. Why Four Hooks Could Not Be Saved

Picture a Windows XP machine in 2005, four months past SP2. The user, doing what users do, installs two antivirus suites: one from a free trial that came with the laptop, one from work. Each ships a kernel driver. Each one calls IOCTL_PF_SET_EXTENSION_POINTER on \Device\Ipfilterdriver to register a packet-inspection callback [7]. An hour later the machine bluescreens during a Windows Update download.

The Microsoft documentation for the IOCTL is precise about what the call does ("registers filter-hook callback functions to the IP filter driver to inform the IP filter driver to call those filter hook callbacks for every IP packet that is received or transmitted") and silent about what happens if a second driver makes the same call before the first one unregisters [7]. The page does not document chaining semantics. There is no mention of a registration list, a callback array, a refcount, or a priority. The driver writers got to invent that themselves, separately, in shipped products. The crash reports speak for the result.
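The failure mode is small enough to model in a few lines. The sketch below is a toy, with invented vendor callbacks and packet shapes; the real IOCTL_PF_SET_EXTENSION_POINTER registers a kernel callback, not a JavaScript function. What it shows is what a single, unchained callback pointer implies when two drivers make the same call:

```javascript
// Toy model of the filter-hook contract: one callback pointer per machine.

let extensionPointer = null; // the machine's single filter-hook slot

function ioctlSetExtensionPointer(callback) {
  // The documented call installs a callback. Nothing in the contract says
  // what happens to a callback that is already installed.
  extensionPointer = callback;
}

function deliverPacket(packet) {
  // The kernel invokes whatever single callback is currently registered.
  return extensionPointer ? extensionPointer(packet) : 'PF_FORWARD';
}

const seenByA = [];
const seenByB = [];

// Vendor A's firewall driver loads first and registers its inspection hook.
ioctlSetExtensionPointer((pkt) => { seenByA.push(pkt); return 'PF_FORWARD'; });

// Vendor B loads an hour later and makes the same call.
ioctlSetExtensionPointer((pkt) => { seenByB.push(pkt); return 'PF_FORWARD'; });

deliverPacket({ src: '192.0.2.1', dst: '198.51.100.7' });

console.log(seenByA.length); // 0 -- vendor A was silently clobbered
console.log(seenByB.length); // 1 -- only the last registrant inspects traffic
```

The best case is what the model shows: vendor A silently stops seeing traffic. The worse cases, where vendors invented their own chaining by caching and calling the previous pointer, are where the crash reports came from.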

Each of the four pre-WFP hooks had a specific architectural flaw. Together those flaws define what WFP had to be.

Filter-hook (IpFilterDriver). One callback pointer per machine; no arbitration; IPv4 only; network layer only. Two security products fight over one callback, and there is no documented way to chain them. Failure: arbitration impossible, vendor coexistence accidental.

NDIS 5.x intermediate driver. High complexity, no application identity (it sees frames, not processes), install-order-dependent binding chains. Microsoft's own assessment of the model, written for the LWF replacement that came in 2006, is: "Filter drivers are easier to implement and have less processing overhead than NDIS intermediate drivers" [6]. Failure: too low for app-aware policy, too painful to write.

TDI filter. Preserved application identity. Vetoed connections at the transport boundary. Architecturally the cleanest of the four. Then Microsoft deprecated TDI in Vista [9] and the substrate evaporated. Failure: the floor disappeared.

Winsock LSP. In-process. User mode. Bypassable by any program that called Nt* system services directly. And, as the Microsoft categorisation page documents, a buggy LSP that crashes LSASS will take down the entire machine [8]. Failure: in process, bypassable, lethal when buggy.

| Pre-WFP hook | Layer | App identity | Multi-vendor | Failure mode | Successor |
|---|---|---|---|---|---|
| Filter-hook (IpFilterDriver) | Network (L3) | No | No documented contract for chaining | Arbitration impossible [7] | WFP filter at INBOUND_IPPACKET_* |
| NDIS 5.x intermediate | Data link (L2) | No | Install-order dependent | Too low for app-aware rules; complex [6] | NDIS Lightweight Filter (LWF) |
| TDI filter | Transport (L4) | Yes | Yes (chainable) | Substrate deprecated in Vista [9] | WFP ALE + Winsock Kernel (WSK) |
| Winsock LSP | Above sockets (user mode) | Yes | Chainable in-process | In-process bypass; lsass blast radius [8] | WFP ALE; LSP retained for non-security uses |

Walk the failure-mode column row by row and a set of design constraints falls out. Whatever Microsoft was going to build had to:

  1. Arbitrate multiple vendors deterministically. No more "first IOCTL wins."
  2. Carry application identity through to the inspection point.
  3. Concentrate inspection at one platform, not four.
  4. Run out of process where possible. A buggy callout cannot be allowed to take down LSASS.
  5. Resolve conflicts predictably, with rules a third-party developer can read and design against.
[Figure: Why filter-hook fails: one callback pointer, two security products, no documented chaining]

Vista shipped November 2006. What did the architects build to satisfy all five constraints at once?

4. The Evolution -- Five Generations of WFP

May 23-25, 2006, Seattle. Madhurima Pawar, Program Manager in Windows Networking, and Eric Stenson, Development Lead in Windows Networking, stand in front of a hostile room of third-party firewall ISVs at WinHEC and present "Windows Filtering Platform And Winsock Kernel: Next-Generation Kernel Networking APIs." Slide 6 carries the design motivation that this article opened on: 12 percent of all OS crashes are firewall and AV conflicts. Slide 7 carries the architecture diagram [10]. Six months later Vista shipped, with the filter-hook and firewall-hook drivers gone from the system and a new platform in their place [2]. Windows Vista was released to manufacturing on November 8, 2006, and made generally available to consumers on January 30, 2007 [11].

Generation 1: WFP v1 in Vista and Server 2008

WFP v1 introduced five named components. They are still the components the platform ships today. Microsoft's own "About Windows Filtering Platform" page enumerates them: the Filter Engine ("the core multi-layer filtering infrastructure, hosted in both kernel-mode and user-mode"); the Base Filtering Engine ("a service that controls the operation of the Windows Filtering Platform"); shims ("kernel-mode components that reside between the kernel-mode network stack and the filter engine"); callout drivers; and the management API [3].

Filter Engine

The core of WFP. Microsoft's WDK reference defines it as "a component of the Windows Filtering Platform that stores filters and performs filter arbitration. Filters are added to the filter engine at designated filtering layers so that the filter engine can perform the desired filtering action (permit, drop, or a callout). If a filter in the filter engine specifies a callout for the filter's action, the filter engine calls the callout's classifyFn function" [12]. The engine is hosted in both kernel mode and user mode; its kernel classification path runs primarily inside NETIO.SYS [4].

Shim

A kernel-mode bridge between a specific network stack module and the WFP filter engine. Vista shipped six shims: the Application Layer Enforcement (ALE) shim, the Transport Layer Module shim, the Network Layer Module shim, the ICMP Error shim, the Discard shim, and the Stream shim [3]. Each shim invokes the filter engine at one or more FWPM_LAYER_* identifiers when traffic crosses it.

The most consequential of those six shims is ALE.

Application Layer Enforcement (ALE)

"A set of Windows Filtering Platform (WFP) kernel-mode layers that are used for stateful filtering" [13]. ALE keeps per-connection state across packets, and -- this is the line that separates ALE from the rest of the platform -- "ALE layers are the only WFP layers where network traffic can be filtered based on the application identity -- using a normalized file name -- and based on the user identity -- using a security descriptor" [13]. ALE is why per-application firewall rules became possible in 2006. It is also the layer that classifies AppContainer connections in modern Windows.

ALE pays for stateful filtering once per connection, not once per packet. The Microsoft Learn page makes the performance claim explicit: at ALE layers, the platform "minimally impacts network performance by processing only the first packet in a connection" [3]. Subsequent packets ride the existing flow state. That choice is what lets a per-application firewall rule scale to gigabit network rates.
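The first-packet idea can be sketched in a few lines. This is a toy model, assuming a plain Map keyed by 5-tuple stands in for the shim's flow table; the real state lives in the ALE shim and is far richer:

```javascript
// Sketch of ALE's "classify only the first packet in a connection" behaviour.

const flowTable = new Map(); // 5-tuple key -> cached verdict

function flowKey(pkt) {
  return [pkt.proto, pkt.srcIp, pkt.srcPort, pkt.dstIp, pkt.dstPort].join('/');
}

// Stand-in for the full filter-engine walk at an ALE authorization layer.
// In this model it runs once per flow, not once per packet.
function classifyConnect(pkt) {
  return pkt.app === 'chrome.exe' ? 'Permit' : 'Block';
}

let classifications = 0;

function alePerPacket(pkt) {
  const key = flowKey(pkt);
  if (!flowTable.has(key)) {
    classifications += 1;               // full classification: first packet only
    flowTable.set(key, classifyConnect(pkt));
  }
  return flowTable.get(key);            // later packets ride the flow state
}

const flow = { proto: 'tcp', srcIp: '10.0.0.2', srcPort: 51000,
               dstIp: '203.0.113.5', dstPort: 443, app: 'chrome.exe' };

for (let i = 0; i < 1000; i++) alePerPacket(flow);

console.log(classifications); // 1 -- a thousand packets, one classification
```

A thousand packets cost one classification; the per-packet path is a hash lookup. That is the trade that makes per-process rules affordable at line rate.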

Generation 2: WFP v2 in Windows 8 and Server 2012

Windows 8 and Server 2012 shipped a refresh in 2012. The "What's New in Windows Filtering Platform" page enumerates the delta in four bullets [15]:

  • "Layer 2 filtering: Provides access to the L2 (MAC) layer, allowing filtering of traffic at that layer."
  • "vSwitch filtering: Allows packets traversing a vSwitch to be inspected and/or modified. WFP filters or callouts can be used at the vSwitch ingress and egress."
  • "App container management: Allows access to information about app containers and network isolation connectivity issues."
  • "IPsec updates: Extended IPsec functionality including connection state monitoring, certificate selection, and key management." [15]

Four features, but the second one -- vSwitch filtering -- is the architecturally significant one. With Windows 8, WFP slid under the Hyper-V Extensible Switch. From that release forward, every Hyper-V VM's packet path is a WFP-extensible classification problem, and the same kernel-mode platform that filters host traffic also filters tenant traffic [15].

Generation 3: Windows 10 ALE redirection (2015-2021)

The Windows 10 family added two ALE layers that did not exist in Vista: CONNECT_REDIRECT and BIND_REDIRECT. The "ALE Layers" page lists them at the bottom of its enumeration [16]. Their job is exactly what their names say -- redirect an outbound connection (proxy it through a different address), or redirect a bind (force a process to bind to a different local endpoint). Web proxies, transparent forwarders, and AppContainer policy now had a kernel-side hook that did not exist before. Forshaw's 2021 Project Zero post documents how the modern Windows Defender Firewall pipeline runs through these layers end-to-end: "MPSSVC converts its ruleset to the lower-level WFP firewall filters and sends them over RPC to the Base Filtering Engine (BFE) service. These filters are then uploaded to the TCP/IP driver (TCPIP.SYS) in the kernel... The evaluation is handled primarily by the NETIO driver as well as registered callout drivers" [4].

Generation 4: URO and the CVE drumbeat (2022-2024)

The most recent generation comes in two parallel tracks. The first is a hardware offload feature. NDIS 6.89, the version of the NDIS driver interface that "is included in Windows 11, version 24H2 and Windows Server 2022 and later," adds support for UDP Receive Segment Coalescing Offload, "this hardware offload enables NICs to coalesce UDP receive segments. NICs can combine UDP datagrams from the same flow that match a set of rules into a logically contiguous buffer. These combined datagrams are then indicated to the Windows networking stack as a single large packet" [17]. Windows 11 24H2 reached general availability on October 1, 2024 [18].

The second track is a sequence of elevation-of-privilege CVEs in the Base Filtering Engine. CVE-2023-29368, published June 14, 2023, is a CWE-415 double-free with a CVSS base score of 7.0 [19]. CVE-2024-38034, published July 9, 2024, is a CWE-190 integer overflow with a CVSS base score of 7.8 [20]. Between the two, the attack-complexity metric dropped from AC:H (high) to AC:L (low), and the exploitability sub-score rose from 1.0 to 1.8 [19][20]. The trend line is that BFE EoP is getting easier to weaponise, not harder.

[Figure: WFP component architecture: filter engine, BFE, shims, callouts, and the user-mode API]
[Figure: WFP generations: Vista to Windows 11 24H2 (timeline source citations follow in adjacent prose)]

Timeline sources, in row order: WinHEC 2006 and the Vista release on the Microsoft Learn WFP start page [10][2]; KB981889 [14]; the "What's New" page [15]; ALE Layers [16] and Forshaw 2021 [4]; the NVD records for CVE-2023-29368 and CVE-2024-38034 [19][20]; NDIS 6.89 introduction and the Windows 11 24H2 GA date [17][18].

Five generations, one engine, no replacements. Why does the same engine still ship in 2026? What is the architectural insight that made it last?

5. Sublayers, Weights, and Veto -- The Arbitration Insight

Here is the question every Windows administrator has wondered: how do two competing security products coexist on the same machine without crashing each other? Before Vista the honest answer was, "they didn't, mostly, and when they did it was an accident." After Vista the honest answer is, "WFP arbitrates them deterministically." The mechanism is the load-bearing piece of the platform, and it is built out of two ideas.

Idea 1: Sublayers and weights

Microsoft's "Filter Arbitration" page describes the algorithm in two sentences that almost no Windows administrator has read:

"Each filter layer is divided into sub-layers ordered by priority (also called weight). Network traffic traverses sub-layers from the highest priority to the lowest priority... Within each sub-layer, filters are ordered by weight. Network traffic is indicated to matching filters from highest weight to lowest weight." [21]

A layer (say, FWPM_LAYER_ALE_AUTH_CONNECT_V4, the place where outbound IPv4 TCP connection authorization is decided) contains an ordered list of sublayers. Each sublayer contains an ordered list of filters. Sublayer priority orders the sublayers. Filter weight orders the filters within a sublayer. Network traffic walks the structure top-down, sublayer by sublayer, filter by filter, until a terminal action is reached.
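The within-sublayer pass can be sketched directly. The filter shapes and weights below are invented for illustration; real filters are FWPM_FILTER0 structures with typed conditions, and real weights are 64-bit values (modelled here as BigInt):

```javascript
// Within-sublayer pass: filters ordered by weight, highest first; the first
// matching filter that returns Permit or Block ends evaluation in the sublayer.

function classifySublayer(packet, filters) {
  const ordered = [...filters].sort((a, b) => (a.weight < b.weight ? 1 : -1));
  for (const f of ordered) {
    if (f.match(packet)) return { action: f.action, filter: f.name };
  }
  return null; // no filter in this sublayer matched
}

const sublayerFilters = [
  { name: 'allow-dns',   weight: 2n ** 60n,
    match: (p) => p.dstPort === 53,               action: 'Permit' },
  { name: 'block-range', weight: 2n ** 59n,
    match: (p) => p.dst.startsWith('203.0.113.'), action: 'Block' },
  { name: 'default',     weight: 0n,
    match: () => true,                            action: 'Block' },
];

// DNS to the blocked range: allow-dns has the higher weight, so it fires
// first and evaluation stops -- block-range never runs for this packet.
console.log(classifySublayer({ dst: '203.0.113.5', dstPort: 53 }, sublayerFilters));
// -> { action: 'Permit', filter: 'allow-dns' }
```

Note what weight buys: the same packet against the same three filters produces a different verdict if the weights are swapped. Ordering is the policy.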

Sublayer

A named, priority-ordered subdivision of a WFP filtering layer. Each sublayer owns a list of filters and has its own GUID. Microsoft's recommendation, in the filter-weight documentation, is that independent vendors "create their own sublayer by using FwpmSubLayerAdd0" rather than register filters into another vendor's sublayer [22]. Sublayer priority is what lets two vendors coexist without interfering.

Filter Weight

A 64-bit value attached to a filter that orders evaluation within a sublayer. The "Filter Weight Assignment" page documents three legal assignment styles: "Set the weight to an FWP_UINT64. BFE uses the supplied weight as is. Set the weight to FWP_EMPTY. BFE automatically generates a weight in the range [0, 2^60). Set the weight to an FWP_UINT8 in the range [0, 15]. BFE uses the supplied weight as a weight range identifier" [22]. Sixteen high-order weight ranges, each 2^60 wide, give vendors a way to carve out non-overlapping neighbourhoods.

The mathematical model is simpler than the prose suggests. Filter weight is an element of [0, 2^64). A filter at weight w1 runs before a filter at weight w2 inside the same sublayer if w1 > w2. Sublayer priority orders the sublayers themselves. When a vendor registers its sublayer at, say, priority 0x1000 and chooses filters in the weight range [2^60, 2^61), that vendor has a deterministic neighbourhood that no other vendor will trample, provided the other vendors follow Microsoft's recommendation to call FwpmSubLayerAdd0 and use their own sublayer. The 16-range partitioning via FWP_UINT8 weights is the mechanism that the platform team baked in to give vendors a coordination protocol without requiring vendors to talk to each other. Microsoft Learn's recommendation, verbatim: "This issue can be prevented by having callouts create their own sublayer by using FwpmSubLayerAdd0" [22].
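The 16-range partition is simple enough to compute. A minimal sketch, assuming range identifier n maps to the half-open interval [n * 2^60, (n + 1) * 2^60) — the exact weight BFE generates inside a chosen range is not modelled here:

```javascript
// Model of FWP_UINT8 weight-range identifiers: 16 ranges of width 2^60
// partition the 64-bit weight space exactly.

const RANGE_WIDTH = 2n ** 60n; // width of each range; 16 ranges = 2^64

function weightRange(rangeId) {
  if (rangeId < 0 || rangeId > 15) {
    throw new RangeError('FWP_UINT8 range identifier must be in [0, 15]');
  }
  const lo = BigInt(rangeId) * RANGE_WIDTH;
  return { lo, hi: lo + RANGE_WIDTH }; // half-open interval [lo, hi)
}

// A vendor that claims range 4 gets a neighbourhood no range-5 vendor can
// trample, without the two vendors ever coordinating with each other.
const vendorA = weightRange(4);
const vendorB = weightRange(5);

console.log(vendorA.hi === vendorB.lo);        // true -- adjacent ranges tile exactly
console.log(weightRange(15).hi === 2n ** 64n); // true -- 16 ranges cover the space
```

The ranges tile the space with no gaps and no overlap, which is the whole point: the coordination protocol is arithmetic, not a vendor mailing list.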

Idea 2: Block-overrides-Permit with Veto

Filter arbitration is actually two passes, not one. Within a single sublayer, the engine evaluates the filters that match in weight order from highest to lowest, and stops at the first filter that returns Permit or Block. That first matching filter wins; lower-weight filters in the same sublayer never run. The engine then performs the same pass on the next sublayer down. Once every sublayer has produced a verdict, the BFE composes those per-sublayer verdicts into one per-layer decision -- and that is where Block-over-Permit and the soft/hard override flag come in. Filter Arbitration states the second pass:

"'Block' overrides 'Permit'. 'Block' is final (cannot be overridden) and stops the evaluation. The packet is discarded." [21]

"Block" and "Permit" each come in two variants. The variant is set by a per-action flag, FWPS_RIGHT_ACTION_WRITE, in the callout's classify-output structure: "If the flag is set, it indicates that the action can be overridden. If the flag is absent, the action cannot be overridden" [21]. The four-cell table below is the override-policy table the BFE uses to compose per-sublayer verdicts into one layer-level action.

| Action | Override allowed? | Common name | What it means |
|---|---|---|---|
| Permit + FWPS_RIGHT_ACTION_WRITE | Yes | Soft permit | A lower-priority sublayer's verdict (composed later by the BFE) may overturn it [21] |
| Permit, flag absent | No | Hard permit | Final permit; only a callout Veto in another sublayer can block [21] |
| Block + FWPS_RIGHT_ACTION_WRITE | Yes | Soft block | A lower-priority sublayer may overturn it, but Block-over-Permit still applies if no override fires [21] |
| Block, flag absent | No | Hard block | Final block. Evaluation stops. Packet discarded [21] |

The soft/hard distinction is therefore a cross-sublayer property, not a within-sublayer one. Within a sublayer the rule is "first match wins"; only the composition step between sublayers consults the override flag.

There is a fifth case. A callout that returns FWP_ACTION_BLOCK while it could have returned FWP_ACTION_PERMIT is exercising what the documentation calls a Veto. The callout has been given the opportunity to authorize a packet and has refused. That is how a third-party EDR's deep-inspection callout can refuse a flow that an in-box filter has already soft-permitted, without ever knowing the soft-permit happened: the engine offers the packet, the callout says no, and the no is final.

[Figure: Cross-sublayer arbitration: how soft/hard verdicts from each sublayer compose into one layer-level decision]

Walk a worked example. An AppContainer process (an Edge tab, say, or any process launched with CreateProcess and an AppContainer SID token) tries to open an outbound TCP connection to 203.0.113.5:443. The Windows TCP/IP stack invokes the ALE shim, which classifies the connection request at FWPM_LAYER_ALE_AUTH_CONNECT_V4. The filter engine walks the sublayers at that layer from highest priority to lowest. Within each sublayer, filters fire highest-weight-first, and the first matching Permit or Block ends evaluation in that sublayer. If a vendor EDR has placed a Veto-style deep-inspection callout in its own sublayer, the callout runs and can deny the connection regardless of what any other sublayer would have done. If no filter explicitly permits the AppContainer with the matching capability SID (internetClient, internetClientServer, or privateNetworkClientServer), the "Block Outbound Default Rule" filter in the firewall's default sublayer fires last and the connection is denied [4].

Cross-sublayer arbitration (faithful pseudocode for one layer's sublayer chain), in JavaScript:

```javascript
// Faithful translation of the Microsoft Learn "Filter Arbitration" algorithm
// for the cross-sublayer composition pass. The within-sublayer pass (not
// shown) returns one verdict per sublayer using a first-match-wins rule on
// weight-ordered filters. This function composes those per-sublayer verdicts
// into the layer-level action using FWPS_RIGHT_ACTION_WRITE semantics.
// Source: https://learn.microsoft.com/en-us/windows/win32/fwp/filter-arbitration

const SOFT_PERMIT = { action: 'Permit', override: true  };
const HARD_PERMIT = { action: 'Permit', override: false };
const SOFT_BLOCK  = { action: 'Block',  override: true  };
const HARD_BLOCK  = { action: 'Block',  override: false };

// Each element is the winning verdict from one sublayer, ordered by sublayer
// priority from highest to lowest.
const sublayerVerdicts = [
  // Vendor EDR deep-inspection callout, hard block on a known-bad destination
  { sublayer: 'EDR-veto',      priority: 100n, match: (pkt) => pkt.dst === '203.0.113.5',
                               verdict: () => HARD_BLOCK },
  // Windows Defender Firewall app rule, allow-with-override
  { sublayer: 'WDF-allow',     priority:  50n, match: () => true,
                               verdict: () => SOFT_PERMIT },
  // Block Outbound Default Rule (BFE default sublayer)
  { sublayer: 'block-default', priority:  10n, match: () => true,
                               verdict: () => HARD_BLOCK },
];

function composeAcrossSublayers(packet, sublayers) {
  // Higher priority composes first
  const ordered = [...sublayers].sort((a, b) => Number(b.priority - a.priority));
  let tentative = null;
  for (const s of ordered) {
    if (!s.match(packet)) continue;
    const v = s.verdict();
    if (!v.override) {
      // Hard action: final, composition stops
      return { decision: v.action, by: s.sublayer };
    }
    // Soft action: remember, but keep composing in case a lower-priority
    // sublayer issues a hard verdict or a Block (Block overrides Permit).
    if (tentative === null || v.action === 'Block') {
      tentative = { decision: v.action, by: s.sublayer };
    }
  }
  return tentative ?? { decision: 'Permit', by: 'no-match-default' };
}

console.log(composeAcrossSublayers({ dst: '203.0.113.5' }, sublayerVerdicts));
// -> { decision: 'Block', by: 'EDR-veto' }  (hard block at priority 100)

console.log(composeAcrossSublayers({ dst: '198.51.100.7' }, sublayerVerdicts));
// -> { decision: 'Block', by: 'block-default' } (soft permit overridden by hard block)
```

Two competing Windows security products coexist on the same host because each one owns its own sublayer, with its own weight neighbourhood. Within a sublayer the BFE picks one winner using "first matching Permit or Block stops evaluation." Across sublayers the BFE composes those winners using "Block overrides Permit, hard actions are final, soft actions can be overridden." Pre-Vista, Windows had filters. Post-Vista, Windows has arbitration.

The engine arbitrates filters deterministically and separates condition-match (the filter) from action (the callout). What does the modern surface look like, in 2026, with two decades of features bolted on top?

6. The Modern WFP Surface

It is 2026. WFP is twenty years old, has never been replaced, and ships under more components than any other Windows networking primitive. Here is what it looks like today.

The filter engine and its kernel client

The filter engine is the same architectural piece WFP v1 shipped with: a cross-mode classifier whose kernel-mode classification path runs primarily inside NETIO.SYS and whose user-mode side runs inside the Base Filtering Engine service host process [23][4]. Callouts and filter consumers do not link against NETIO.SYS. They link against a different binary.

fwpkclnt.sys (kernel-mode WFP client / export driver)

The kernel-mode WFP client and export driver. Callout drivers and other kernel components link against fwpkclnt.lib, whose in-memory module is fwpkclnt.sys [23]. The driver is the API surface that callouts use to register, classify, and call back into the engine. The classification path itself, where filters are matched and actions chosen, runs primarily in NETIO.SYS. The shorthand "fwpkclnt.sys is the filter engine" is common in blog posts and incorrect; the two binaries do different jobs.

The BFE-vs-MpsSvc split is the second confusion to clear. bfe is the Base Filtering Engine, the platform service [3]. MpsSvc is the Windows Defender Firewall service, one consumer of the platform. The dependency goes one way: MpsSvc depends on bfe; bfe does not depend on MpsSvc. You can verify the dependency direction on any running Windows box. Get-Service bfe, Get-Service mpssvc, then Get-Service mpssvc | Select-Object -ExpandProperty ServicesDependedOn will list BFE (among others); the reverse query on bfe lists no dependency on mpssvc. Forshaw's 2021 post documents the same arrow from the policy side: "MPSSVC converts its ruleset to the lower-level WFP firewall filters and sends them over RPC to the Base Filtering Engine (BFE) service" [4].

Roughly sixty filtering layers

Microsoft's "Management Filtering Layer Identifiers" reference enumerates about sixty FWPM_LAYER_* GUIDs, organised by shim, direction (inbound, outbound, forward), stage (pre-IPsec, post-IPsec, discard), and IP version (v4 / v6) [1]. The reference page is dense, but reading it once teaches the structure. A small sample of representative layers:

  • FWPM_LAYER_INBOUND_IPPACKET_V4 and _V6. "Located in the receive path just after the IP header of a received packet has been parsed but before any IP header processing takes place. No IPsec decryption or reassembly has occurred" [1]. The earliest visibility a callout has into a received packet.
  • FWPM_LAYER_OUTBOUND_IPPACKET_V4 and _V6. The send-path twin.
  • FWPM_LAYER_IPFORWARD_V4 and _V6. The routing-decision point on a forwarding host [1].
  • FWPM_LAYER_INBOUND_TRANSPORT_V4 and _V6. After the TCP/UDP/ICMP header has been parsed but before payload delivery [1].
  • FWPM_LAYER_STREAM_V4 and _V6. The TCP stream layer where reassembled byte streams are visible [1].
  • FWPM_LAYER_DATAGRAM_DATA_V4 and _V6. Connectionless data delivery (UDP / ICMP) [1].
  • FWPM_LAYER_INBOUND_MAC_FRAME_ETHERNET. Added in Windows 8; the L2 hook the "What's New" page introduced [15].

Each non-DISCARD layer has a DISCARD twin that fires when the engine has decided to drop a packet at that point. Callouts that need to log drops register at the DISCARD layer; callouts that need to inspect or modify register at the non-DISCARD twin [1].
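The non-DISCARD/DISCARD pairing follows a strict naming convention (FWPM_LAYER_..._DISCARD), which makes the registration choice mechanical. A minimal sketch; the helper name is invented for illustration and is not a WFP API:

```javascript
// The FWPM_LAYER_* naming convention appends _DISCARD to a layer's
// identifier to name its discard twin, e.g.
// FWPM_LAYER_INBOUND_IPPACKET_V4 -> FWPM_LAYER_INBOUND_IPPACKET_V4_DISCARD.
// pickRegistrationLayer is a hypothetical helper, not a WFP API.
function pickRegistrationLayer(baseLayer, purpose) {
  if (purpose === 'log-drops') return baseLayer + '_DISCARD';
  if (purpose === 'inspect' || purpose === 'modify') return baseLayer;
  throw new Error('unknown purpose: ' + purpose);
}

console.log(pickRegistrationLayer('FWPM_LAYER_INBOUND_IPPACKET_V4', 'log-drops'));
// FWPM_LAYER_INBOUND_IPPACKET_V4_DISCARD
console.log(pickRegistrationLayer('FWPM_LAYER_STREAM_V4', 'inspect'));
// FWPM_LAYER_STREAM_V4
```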

ALE classification

The ALE shim sits across seven FWPM_LAYER_ALE_* filtering layers plus the two redirection layers introduced in the Windows 10 era [16]:

  • RESOURCE_ASSIGNMENT -- local endpoint assignment (bind).
  • AUTH_LISTEN -- TCP listen.
  • AUTH_RECV_ACCEPT -- inbound TCP accept; inbound UDP/ICMP first datagram.
  • AUTH_CONNECT -- outbound TCP connect; outbound UDP/ICMP first datagram.
  • FLOW_ESTABLISHED -- the stateful "connection now exists" event.
  • RESOURCE_RELEASE, ENDPOINT_CLOSURE -- teardown.
  • CONNECT_REDIRECT, BIND_REDIRECT -- the Windows 10 redirection hooks.

Stateful per-flow context lives in the ALE shim. Application identity at each ALE layer is a normalized file name; user identity is a security descriptor [13]. That pair is what turns "block port 443 outbound" into "block port 443 outbound from chrome.exe running as user S-1-5-21-...."
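The app-plus-user pair is easiest to see as data. The sketch below models an ALE-connect rule and a classify check; the JavaScript shapes and the simplified action strings are invented for the example (the real conditions are FWPM_CONDITION_ALE_APP_ID and FWPM_CONDITION_ALE_USER_ID, and the real actions are FWP_ACTION_* constants):

```javascript
// Illustrative model of an ALE-layer rule. Field names and action
// strings are simplified stand-ins, not WFP structures.
const rule = {
  layer: 'FWPM_LAYER_ALE_AUTH_CONNECT_V4',
  remotePort: 443,
  appId: '\\device\\harddiskvolume3\\program files\\browser\\browser.exe',
  userSid: 'S-1-5-21-1111111111-2222222222-3333333333-1001',
  action: 'BLOCK',
};

// Return the rule's action when every condition matches; otherwise
// evaluation continues to the next filter in the sublayer.
function classify(rule, flow) {
  const match =
    flow.remotePort === rule.remotePort &&
    flow.appId === rule.appId &&
    flow.userSid === rule.userSid;
  return match ? rule.action : 'CONTINUE';
}

console.log(classify(rule, {
  remotePort: 443,
  appId: '\\device\\harddiskvolume3\\program files\\browser\\browser.exe',
  userSid: 'S-1-5-21-1111111111-2222222222-3333333333-1001',
})); // BLOCK
```

The point of the sketch is the condition set: "port 443" alone is a packet-filter rule, while "port 443 from this normalized file name under this SID" is only expressible at an ALE layer.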

In-box callouts and downstream features

The "Built-in Callout Identifiers" reference page enumerates the GUIDs of every in-box callout: the FWPM_CALLOUT_IPSEC_* family (transport, tunnel, forward-tunnel, inbound-initiate-secure, ALE-connect); FWPM_CALLOUT_WFP_TRANSPORT_LAYER_V4_SILENT_DROP and _V6_SILENT_DROP; the FWPM_CALLOUT_TCP_CHIMNEY_* callouts [24]. Microsoft describes the four canonical roles a callout plays: "Deep Inspection... Packet Modification... Stream Modification... Data Logging" [25].

Callout Driver

A kernel driver that registers one or more callout functions with the filter engine. The engine invokes a callout's classifyFn when a filter at a layer specifies the callout's GUID as its action [12]. Callouts implement one of four roles: deep inspection (read-only payload examination), packet modification, stream modification, or data logging [25]. Every third-party network-security product on Windows that runs in the kernel ships a callout driver.

The downstream features are not peers of WFP. They are configurations of it.

  • Windows Defender Firewall with Advanced Security (WFAS). Microsoft Learn names this relationship verbatim: "The firewall application that is built into Windows Vista, Windows Server 2008, and later operating systems Windows Firewall with Advanced Security (WFAS) is implemented using WFP" [2]. The MpsSvc service translates the WFAS rule database into WFP filters that live in the MPSSVC_WSH provider's sublayer [4].
  • Windows IPsec. The Base Filtering Engine "plumbs configuration settings to other modules in the system. For example, IPsec negotiation polices go to IKE/AuthIP keying modules, filters go to the filter engine" [3]. IPsec is not a separate stack; it is a configuration of WFP plus the IKE/AuthIP keying modules.
  • WinNAT and Windows container networking. The PowerShell cmdlet New-NetNat "creates a Network Address Translation (NAT) object that translates an internal network address to an external network address" [26]; WinNAT, the implementation behind it, registers WFP filters to perform the translation. Windows containers use WinNAT for their default NAT switch.
  • Hyper-V Extensible Switch. Since Windows 8 / Server 2012, "the Hyper-V extensible switch is supported starting with NDIS 6.30 in Windows Server 2012," and the switch supports extensible-switch extensions that "bind within the extensible switch driver stack" [27]. WFP filters and callouts can be placed at vSwitch ingress and egress [15].
  • Microsoft Defender for Endpoint Network Protection. The Microsoft Learn page documents the capability: "Network Protection will block connections on all ports (not just 80 and 443)" [28]. The product enforces SmartScreen domain reputation across the entire process tree, not just the browser. The exact WFP-layer registration map is one of the rare honest-disclosure gaps in the WFP story: Microsoft has published the capability [28] but not the set of FWPM_LAYER_* identifiers Network Protection registers callouts at, and community reverse engineering knows only fragments of the map. Section 9 treats this as an open engineering problem.
  • Third-party EDR network filters. CrowdStrike Falcon, SentinelOne, Cisco Secure Endpoint, ESET, Sophos, and the rest of the EDR vendor list ship WFP callout drivers as the standard kernel-side primitive for network telemetry and policy enforcement. There is no single Microsoft document that lists them. Forshaw's 2021 Project Zero post is the closest a primary source comes to acknowledging that this is how the industry has settled [4].

Six downstream consumers on one engine. So what are the alternatives, if you want to ship a kernel-mode network filter on Windows today and do not want to use WFP?

7. Competing Approaches -- LWF, eBPF, Extensible Switch, and the Azure VFP

WFP is the L3+ answer. What else is there to attach to?

NDIS Lightweight Filter (LWF). The L2 sibling. NDIS 6.0, shipped with Vista, introduced "NDIS filter drivers. Filter drivers can monitor and modify the interaction between protocol drivers and miniport drivers. Filter drivers are easier to implement and have less processing overhead than NDIS intermediate drivers" [6]. LWF is the modern replacement for NDIS 5.x intermediate drivers. It sits below the protocol stack, sees raw Ethernet frames, has no application identity, and is the right choice for raw L2 work: VLAN tagging, EAPoL, packet capture (Npcap, NMNT). Choose LWF over WFP when you need pre-IP visibility and no per-process identity.

NDIS Lightweight Filter (LWF)

A kernel filter driver registered with NDIS that monitors or modifies the path between a protocol driver and a miniport driver. LWF replaced NDIS 5.x intermediate drivers starting with NDIS 6.0 [6]. LWF drivers see Ethernet frames before any IP processing has happened. They cannot see application identity, since the OS does not yet know which process the frame belongs to.

Hyper-V Extensible Switch extensions. A specialised NDIS LWF profile. NDIS 6.30, Windows Server 2012. "The Hyper-V extensible switch supports an interface that allows instances of NDIS filter drivers (known as extensible switch extensions) to bind within the extensible switch driver stack... The Hyper-V extensible switch is supported starting with NDIS 6.30 in Windows Server 2012" [27]. Extensions come in three roles -- capture, filter, and forwarding -- with one forwarding-extension slot per vSwitch. Choose extensible switch extensions for Hyper-V Network Virtualization, software-defined-networking overlays, or SR-IOV gating.

eBPF for Windows. A Microsoft-sponsored project to bring the Linux eBPF programming model to Windows. The GitHub README describes its scope as letting existing eBPF toolchains and APIs familiar from Linux be used on top of Windows, and frames the project as a work-in-progress [30]. Three deployment modes: native ("PREVAIL verifier... bpf2c tool converts every instruction in the bytecode to equivalent C statements... built into a windows driver module (stored in a .sys file)... This is the preferred way of deploying eBPF programs" [30]); JIT (user-mode service, "with HVCI enabled, eBPF programs cannot be JIT compiled, but can be run in the native mode" [30]); and interpreter (debug only). The hooks the project exposes (XDP, BIND, SOCK_ADDR, SOCK_OPS, CGROUP_SOCK_ADDR) are the Linux-flavoured analogues of the WFP shim points. The v1.1.0 release, published in March 2026 and labelled "first stable" while still tagged Pre-release, "added hard/soft permit verdicts" to its accept and bind hooks -- explicitly mirroring the WFP FWPS_RIGHT_ACTION_WRITE model [31]. The project's own documentation pages repeat the work-in-progress framing [32]. Choose eBPF for Windows for pre-stack DDoS scrubbing or cross-platform observability prototypes; the production-readiness caveat applies.

eBPF for Windows

A Microsoft-sponsored open-source project that ports the Linux eBPF execution and toolchain to Windows. The native deployment mode compiles eBPF bytecode through PREVAIL verification and the bpf2c translator into a signed .sys kernel driver, which preserves HVCI compatibility [30]. As of the v1.1.0 release (March 2026), the project remains tagged Pre-release on GitHub [31].

Azure VFP -- a name collision that requires disambiguation. The Azure host-SDN data plane, presented by Daniel Firestone at NSDI 2017 [33], is called the Virtual Filtering Platform. The initials are one letter away from WFP's, but it is a different platform. VFP is the programmable virtual switch that runs on every Azure compute host; the NSDI 2017 abstract notes that "VFP has been deployed on >1M hosts running IaaS and PaaS workloads for over 4 years" [33]. It uses match-action tables, layers (the word "layer" appears with a different semantic from WFP's), Unified Flow Tables, and AccelNet FPGA offload via the Generic Flow Table. VFP ships with Azure, on Azure hosts. It is not customer-buildable on a Windows desktop, and Windows desktop and Server SKUs do not run it. The platforms are unrelated despite the name overlap.

Approach                      | Layer / scope                                    | App identity             | Best for
WFP callout driver            | L3+ across approximately sixty FWPM_LAYER_* IDs [1] | Yes, via ALE [13]        | App-aware on-host filtering and EDR telemetry
NDIS LWF                      | L2, below the protocol stack [6]                 | No                       | Raw L2: capture, VLAN, EAPoL
Hyper-V Extensible Switch ext | Inside the vSwitch, NDIS 6.30+ [27]              | Per-VM, not per-process  | Hyper-V network virtualization, SDN overlays
eBPF for Windows              | XDP / BIND / SOCK_ADDR hooks [30]                | Partial                  | Pre-stack DDoS, cross-platform observability prototypes (Pre-release)
Azure VFP                     | Azure host SDN; not customer-buildable [33]      | N/A                      | Azure-host SDN policy (Microsoft-internal)
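The table collapses into a short decision procedure. The function below is an informal restatement of its rows, with requirement names invented for the sketch; it is not Microsoft guidance:

```javascript
// Informal restatement of the comparison table as a decision helper.
// All requirement flag names are invented for this sketch.
function pickApproach(req) {
  if (req.azureHostSdn) return 'Azure VFP (Microsoft-internal; not customer-buildable)';
  if (req.insideVSwitch) return 'Hyper-V Extensible Switch extension';
  if (req.preIpVisibility && !req.appIdentity) return 'NDIS Lightweight Filter';
  if (req.crossPlatformEbpfToolchain) return 'eBPF for Windows (Pre-release caveat)';
  return 'WFP callout driver'; // the default on-host, app-identity-aware case
}

console.log(pickApproach({ appIdentity: true }));     // WFP callout driver
console.log(pickApproach({ preIpVisibility: true })); // NDIS Lightweight Filter
```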

None of these displaces WFP for the dominant on-host case (application-identity-aware, IPsec-integrated, stateful, multi-vendor-arbitrated). And all of them share one limit -- a limit that is built into the laws of network physics, not into Microsoft's roadmap.

8. Three Ceilings -- Encryption, Offload, Kernel EoP

Three ceilings sit above WFP and every alternative listed above. None is a Microsoft bug. All are structural.

The encryption ceiling

A WFP callout at the stream layer sees plaintext only if the payload was never encrypted, or if it was encrypted by a key the kernel owns (IPsec). IPsec is the one case where the kernel does hold the keys, because the IKE/AuthIP keying modules that BFE plumbs to are themselves Windows components [3]. Every other in-process TLS or QUIC stack keeps its keys away from the kernel. TLS 1.3 and QUIC are end-to-end encrypted from the callout's point of view; the keys are inside the application's user-mode TLS library. A callout that registers at FWPM_LAYER_STREAM_V4 and reads bytes off a Chrome HTTPS connection sees ciphertext.

The case is even sharper for QUIC. QUIC runs over UDP. From the first packet, almost all of the QUIC control plane is encrypted with a key derived from the connection's initial secret. A datagram-layer callout that wants to inspect the QUIC handshake -- not the payload, just the handshake -- cannot. Microsoft's own product team has acknowledged the limit in plain English on the Defender for Endpoint Network Protection page:

"Blocking FQDNs in non-Microsoft browsers requires that QUIC and Encrypted Client Hello be disabled in those browsers." -- Microsoft Defender for Endpoint, Network Protection [28]

That sentence is the encryption ceiling in Microsoft's own words. The product can block by 5-tuple (IP, port, protocol). It cannot block by hostname inside an Edge tab over QUIC unless QUIC is disabled in that browser. The limit is information-theoretic: a kernel filter without the session keys cannot read the encrypted payload. No engineering changes in WFP can lift it. The fix lives in the browser or in a user-mode TLS-inspecting proxy.
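One consequence of the ceiling is worth making concrete: a stream-layer callout reduced to ciphertext can still do coarse statistics on the bytes it sees. The byte-entropy heuristic below is a generic illustration of that fallback, not a description of any shipping product:

```javascript
// Shannon entropy in bits per byte. Encrypted payloads approach 8;
// repetitive plaintext protocols sit well below that. Illustrative only.
function entropyBitsPerByte(bytes) {
  const counts = new Array(256).fill(0);
  for (const b of bytes) counts[b]++;
  let h = 0;
  for (const c of counts) {
    if (c === 0) continue;
    const p = c / bytes.length;
    h -= p * Math.log2(p);
  }
  return h;
}

const plaintext = Buffer.from(
  'GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n'.repeat(100));
const ciphertextLike = Buffer.from(
  Array.from({ length: 65536 }, () => Math.floor(Math.random() * 256)));

console.log(entropyBitsPerByte(plaintext).toFixed(2));      // low: repetitive ASCII
console.log(entropyBitsPerByte(ciphertextLike).toFixed(2)); // close to 8.00
```

The heuristic can say "this flow looks encrypted"; it cannot say what is inside, which is exactly the ceiling.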

The offload ceiling

The second ceiling came from hardware. Modern NICs do work that the kernel used to do, because doing it in hardware is faster. UDP Receive Segment Coalescing Offload, the marquee feature of NDIS 6.89 in Windows 11 24H2, is the cleanest example: "URO enables network interface cards (NICs) to coalesce UDP receive segments. NICs can combine UDP datagrams from the same flow that match a set of rules into a logically contiguous buffer. These combined datagrams are then indicated to the Windows networking stack as a single large packet" [34].

The "logically contiguous buffer" is the problem. A WFP callout written against the pre-URO semantics ("one indication at FWPM_LAYER_DATAGRAM_DATA_V4 is one UDP datagram") is silently wrong on a system where the NIC has coalesced several datagrams into one Network Buffer List. The callout that needs per-datagram inspection has to read NDIS_UDP_RSC_OFFLOAD_NET_BUFFER_LIST_INFO to learn the per-flow size and unfold the indication accordingly [34]. The mechanical bound is that work the NIC has aggregated has lost its per-packet boundary by the time the kernel sees it.
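The unfolding a datagram-aware callout has to do can be sketched in a few lines. The segment-size argument stands in for the per-flow value the callout reads from NDIS_UDP_RSC_OFFLOAD_NET_BUFFER_LIST_INFO; the function name is invented:

```javascript
// Split one URO-coalesced buffer back into per-datagram segments.
// Every segment is segSize bytes except possibly the last one.
function unfoldUro(coalesced, segSize) {
  const datagrams = [];
  for (let off = 0; off < coalesced.length; off += segSize) {
    datagrams.push(coalesced.slice(off, off + segSize));
  }
  return datagrams;
}

// Three 1200-byte datagrams plus a 500-byte tail, coalesced by the NIC:
const buf = new Uint8Array(3 * 1200 + 500);
const parts = unfoldUro(buf, 1200);
console.log(parts.length);             // 4
console.log(parts.map(p => p.length)); // [ 1200, 1200, 1200, 500 ]
```

A callout written against the pre-URO one-indication-one-datagram assumption skips this step and silently treats the whole coalesced buffer as a single datagram.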

The same shape repeats for TCP segmentation offload (TSO, LSO), receive offload (LRO, GRO), and TLS / IPsec / RDMA / VxLAN / GENEVE offload. Each one moves work to hardware. Each one weakens the kernel-filter assumption that "every packet flows past every layer."

The kernel attack surface

The third ceiling is the one that drives the CVE cadence. Every callout is a kernel module [25]. Every byte that crosses the Fwpm* user-to-kernel boundary is a potential primitive for an elevation-of-privilege exploit [19][20]. CVE-2023-29368, published June 14, 2023, is a CWE-415 double-free in the WFP code path with a CVSS base of 7.0 (AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H), an exploitability sub-score of 1.0, and an impact sub-score of 5.9 [19]. CVE-2024-38034, published July 9, 2024, is a CWE-190 integer overflow in the same family of code paths with a CVSS base of 7.8 (AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H), an exploitability sub-score of 1.8, and an impact sub-score of 5.9 [20].

The CVSS vector difference is worth reading carefully. The 2024 vulnerability's attack complexity dropped from AC:H to AC:L, and the exploitability sub-score rose from 1.0 to 1.8 over the same window: the 2024 bug is easier to weaponise [19][20]. Without speculating about the trend across a longer time series, the direction of travel between these two anchor CVEs is toward lower attacker cost, not higher.
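The sub-scores quoted above fall straight out of the CVSS v3.1 formulas, so the claim is checkable. A sketch using the standard v3.1 metric weights (PR:L weighted for unchanged scope, which both vectors declare), with one-decimal rounding for display:

```javascript
// CVSS v3.1: exploitability = 8.22 * AV * AC * PR * UI;
// impact = 6.42 * (1 - (1-C)(1-I)(1-A)) when scope is unchanged.
const W = {
  AV: { L: 0.55 },
  AC: { H: 0.44, L: 0.77 },
  PR: { L: 0.62 },          // PR:L weight for Scope: Unchanged
  UI: { N: 0.85 },
  CIA: { H: 0.56 },
};
const round1 = (x) => Math.round(x * 10) / 10;

function exploitability(av, ac, pr, ui) {
  return round1(8.22 * W.AV[av] * W.AC[ac] * W.PR[pr] * W.UI[ui]);
}
function impact(c, i, a) {
  const isc = 1 - (1 - W.CIA[c]) * (1 - W.CIA[i]) * (1 - W.CIA[a]);
  return round1(6.42 * isc);
}

// CVE-2023-29368: AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H
console.log(exploitability('L', 'H', 'L', 'N'), impact('H', 'H', 'H')); // 1 5.9
// CVE-2024-38034: AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
console.log(exploitability('L', 'L', 'L', 'N'), impact('H', 'H', 'H')); // 1.8 5.9
```

The only term that changed between the two vectors is the AC weight (0.44 versus 0.77), which is the whole 1.0-to-1.8 jump.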

There is a structural variant of the same story that does not require any memory-safety bug at all. In August 2021, Forshaw published a Project Zero post titled "Understanding Network Access in Windows AppContainers." The post documents a default-WFP-policy configuration that allows certain low-privilege AppContainer processes to reach the network without any of the capability SIDs (internetClient, internetClientServer, privateNetworkClientServer) that the AppContainer documentation suggests are required [4]. The associated Project Zero issue, 2207, was marked WontFix by Microsoft; the press coverage at SecurityAffairs reproduces the advisory body verbatim: "The default rules for the WFP connect layers permit certain executables to connect TCP sockets in AppContainers without capabilities leading to elevation of privilege... Eventually an AC process will match the 'Block Outbound Default Rule' rule if nothing else has which will block any connection attempt" [35]. The bug is a policy composition bug, not a code bug. It exists in the way the in-box sublayers, filter weights, and default rules interact -- which is precisely the surface this article spent Section 5 explaining.

WFP's hardest limits are not engineering choices Microsoft can rewrite. They are information-theoretic (a kernel filter without session keys cannot read what is encrypted), mechanical (hardware offloads exist to amortise work the kernel filter would have done, and aggregation destroys per-packet ground truth), and structural (every callout is a kernel module, and every Fwpm* call crosses a user-to-kernel ABI). The BFE elevation-of-privilege CVE class is the running cost of a platform sophisticated enough to host every downstream feature Windows ships.

Three ceilings. Is there a structural fix for any of them, or is this what the platform looks like forever?

9. Open Problems -- Where the Engineering Lives

Six questions are live right now. None of them has a clean answer.

QUIC inspection in the kernel. The current best partial result is to block QUIC by 5-tuple and rely on a browser's HTTP/3 fallback to TLS over TCP, where in-box inspection still works. The Defender for Endpoint Network Protection page documents the workaround verbatim: "Blocking FQDNs in non-Microsoft browsers requires that QUIC and Encrypted Client Hello be disabled in those browsers" [28]. Anything deeper than 5-tuple inspection on QUIC requires a user-mode proxy that terminates the QUIC connection and re-originates it, which moves the problem out of WFP.

Microsoft Defender for Endpoint's exact WFP-layer registration map. Publicly undocumented. Microsoft has published the capability and the limitations [28] but not the precise set of FWPM_LAYER_* GUIDs that Network Protection registers callouts at. Community reverse engineering knows fragments. A definitive map would let third-party EDR vendors avoid sublayer-priority conflicts with Defender. Whether Microsoft publishes one is a product-roadmap question.

The structural shape of the BFE EoP CVE class. Is the BFE elevation-of-privilege CVE class -- CWE-415 in 2023 [19], CWE-190 in 2024 [20], no public impossibility theorem either way -- tail risk inherent to the platform's policy-from-user-mode-to-kernel design, or is it addressable by an architectural fix (HVCI hardening on fwpkclnt.sys callout paths, bounded ABI contracts on the Fwpm* surface, Rust-in-Windows-kernel for new callout drivers)? The honest answer is that this is open. The integer-overflow / use-after-free class is the canonical attack surface of any user-to-kernel ABI; the question is whether Microsoft commits to a structural fix or to tail-risk-mitigation-plus-patching.

eBPF for Windows production readiness. Does it displace WFP for new kernel-mode network filters, or does it stay adjacent? The v1.1.0 release in March 2026 was framed as "first stable" while still labelled Pre-release [31]. The same release added hard/soft permit verdicts to its accept and bind hooks, explicitly mirroring FWPS_RIGHT_ACTION_WRITE in WFP [31]. That borrowing is a tell -- the project is converging on the WFP arbitration semantics, which suggests the long-term picture is "eBPF for Windows alongside WFP" rather than "eBPF replaces WFP." The market answer is unsettled.

Windows Defender Application Guard's egress-isolation pattern after WDAG deprecation. WDAG for Edge used a WFP-backed egress-isolation pattern to route browsing-container traffic out of an isolated network compartment. The WDAG product surface is being phased out -- Microsoft has documented that "Microsoft Defender Application Guard... is deprecated for Microsoft Edge for Business and will no longer be updated. Starting with Windows 11, version 24H2, Microsoft Defender Application Guard... is no longer available" [36]. The pattern's future on Windows -- in containers, virtualization-based security profiles, or some successor -- is undocumented as of the time of writing. Treat this paragraph as conjectural until Microsoft publishes a successor pattern.

NIC offload composability with kernel firewalls. As more pipeline elements move into the NIC -- TSO, LSO, GRO/GSO, URO [34], TLS offload, IPsec offload, RDMA, VxLAN, GENEVE -- the assumption that every packet flows past every WFP layer weakens. A callout that registers at FWPM_LAYER_INBOUND_TRANSPORT_V4 may never see a packet whose transport-layer work happened entirely on the NIC. The kernel-firewall design that grew up assuming software ground truth has to renegotiate that assumption release by release. NDIS 6.89's URO is the most recent example [17]; there will be more.

Six open problems. Now, how do you actually use the platform that has been the subject of this article?

10. The Four Ways You Touch WFP

Whether you are an administrator, a detection engineer, or a kernel driver writer, there are four canonical surfaces you actually touch. Here is the field guide.

The diagnostic surface: netsh wfp

Wikipedia's WFP page notes the introduction date: "Starting with Windows 7, the netsh command can diagnose the internal state of WFP" [37]. The canonical incident-response triplet is three commands long: netsh wfp show state for the full engine configuration, netsh wfp show filters for the filter set, and netsh wfp capture start / netsh wfp capture stop for a trace of classification activity.

A state.xml from netsh wfp show state is an XML document with one <item> per filter. Each item carries a <displayData> element with a name and description, the layer GUID, the sublayer GUID, the weight, and the action. Reading one is a matter of pattern recognition rather than parsing. The next snippet walks the structure on a hand-pasted fragment.

JavaScript Decoding a netsh wfp show state filter element
// A real-world 'netsh wfp show state' output contains many <item> elements
// inside <filters>. The fragment below is a single filter, hand-pasted from
// a 'show state' XML dump.
const xmlFragment = `
<item>
<filterKey>{deadbeef-1111-2222-3333-444455556666}</filterKey>
<displayData>
  <name>EDR-vendor outbound TCP inspect</name>
  <description>Vendor X deep-inspection callout filter</description>
</displayData>
<layerKey>FWPM_LAYER_ALE_AUTH_CONNECT_V4</layerKey>
<subLayerKey>{a0192d10-aaaa-bbbb-cccc-1234567890ab}</subLayerKey>
<weight>
  <type>FWP_UINT64</type>
  <uint64>0x4000000000000064</uint64>
</weight>
<action>
  <type>FWP_ACTION_CALLOUT_INSPECTION</type>
</action>
</item>
`;

// First <tag> anywhere in the given XML string.
function textOf(xml, tag) {
  const m = xml.match(new RegExp('<' + tag + '>([^<]+)</' + tag + '>'));
  return m ? m[1].trim() : null;
}

// <tag> scoped inside its <parent> element. Scoping matters: both
// <weight> and <action> carry a <type> child, so an unscoped search
// for <type> would return the weight's type instead of the action's.
function textIn(xml, parent, tag) {
  const block = xml.match(new RegExp('<' + parent + '>([\\s\\S]*?)</' + parent + '>'));
  return block ? textOf(block[1], tag) : null;
}

function readFilter(xml) {
  return {
    name:     textOf(xml, 'name'),
    layer:    textOf(xml, 'layerKey'),
    subLayer: textOf(xml, 'subLayerKey'),
    weight:   textIn(xml, 'weight', 'uint64'),
    action:   textIn(xml, 'action', 'type'),
  };
}

console.log(readFilter(xmlFragment));
// {
//   name: 'EDR-vendor outbound TCP inspect',
//   layer: 'FWPM_LAYER_ALE_AUTH_CONNECT_V4',
//   subLayer: '{a0192d10-aaaa-bbbb-cccc-1234567890ab}',
//   weight: '0x4000000000000064',
//   action: 'FWP_ACTION_CALLOUT_INSPECTION'
// }

Press Run to execute.

Five fields: name, layer, sublayer, weight, action. That is what every WFP filter resolves to. Reading a hundred of them takes an afternoon.
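Reading a hundred of them goes faster with a few lines of tooling. The sketch below groups parsed filters by sublayer and sorts each group the way the engine orders evaluation within a sublayer, highest 64-bit weight first; the filter objects are hand-made for the example:

```javascript
// Given filters already reduced to the five fields, group them by
// sublayer and sort each group by descending 64-bit weight, which is
// the evaluation order inside a sublayer.
function arbitrationOrder(filters) {
  const bySubLayer = new Map();
  for (const f of filters) {
    if (!bySubLayer.has(f.subLayer)) bySubLayer.set(f.subLayer, []);
    bySubLayer.get(f.subLayer).push(f);
  }
  for (const group of bySubLayer.values()) {
    // BigInt comparison avoids precision loss on 64-bit hex weights.
    group.sort((a, b) => (BigInt(b.weight) < BigInt(a.weight) ? -1 : 1));
  }
  return bySubLayer;
}

const ordered = arbitrationOrder([
  { name: 'vendor block',  subLayer: '{aaaa}', weight: '0x4000000000000064' },
  { name: 'vendor permit', subLayer: '{aaaa}', weight: '0x4000000000000001' },
  { name: 'firewall rule', subLayer: '{bbbb}', weight: '0x0000000000001000' },
]);
console.log(ordered.get('{aaaa}').map(f => f.name));
// [ 'vendor block', 'vendor permit' ]
```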

The administrative surface: wf.msc

The Microsoft Management Console snap-in is the surface most Windows users have actually clicked. Every rule created in wf.msc is translated by the MpsSvc service into a WFP filter and pushed into the BFE's MPSSVC provider sublayer over RPC, and from there into TCPIP.SYS in the kernel [4]. The UI exposes a small fraction of the filter properties WFP actually models; advanced rule attributes (per-AppContainer SID, per-package family name, per-service hardening) live in the underlying filter only.

The networking surface: New-NetNat and Hyper-V NAT switches

The PowerShell cmdlet New-NetNat "creates a Network Address Translation (NAT) object that translates an internal network address to an external network address" [26]. Each NAT object materialises as a set of WFP filters that perform the translation. Windows containers use the same machinery for their default NAT switch. The Get-NetNat, Remove-NetNat, and related cmdlets in the NetNat PowerShell module are the entry point.

The driver surface: writing a WFP callout

The WDK's "Introduction to Windows Filtering Platform Callout Drivers" page is the entry point for kernel-mode writers [25]. The reference sample, WFPSampler, lives in the microsoft/Windows-driver-samples repository under network/trans/WFPSampler. The sample's description: "The WFPSampler sample driver is a sample firewall. It has a command-line interface which allows adding filters at various WFP layers with a wide variety of conditions. Additionally it exposes callout functions for injection, basic action, proxying, and stream inspection" [38]. The sample ships five components: WFPSampler.Exe, WFPSamplerService.Exe, WFPSamplerCalloutDriver.Sys, WFPSamplerProxyService.Exe, and the two libraries WFPSampler.Lib / WFPSamplerSys.Lib.

If you install WFPSampler and the installer refuses to register without a reboot prompt, the README documents a workaround: run RunDLL32 setupapi.dll,InstallHinfSection DefaultInstall 131 wfpsampler.inf to install, and RunDLL32 setupapi.dll,InstallHinfSection DefaultInstall 132 wfpsampler.inf for the corresponding uninstall codepath [38]. The 131/132 values encode InstallHinfSection's mode parameter: 128 tells setupapi to look for the INF in its own directory, and the low bits select the reboot-handling behaviour.

A WFP callout driver that originates kernel-mode network I/O should pair with Winsock Kernel.

Winsock Kernel (WSK)

"Winsock Kernel (WSK) is a kernel-mode Network Programming Interface (NPI)" [39]. WSK is the modern replacement for TDI as the kernel-mode sockets API on Windows Vista and later. Microsoft's WSK introduction makes the split explicit: "Filter drivers should implement the Windows Filtering Platform on Windows Vista, and TDI clients should implement WSK" [39]. WFP filters traffic. WSK opens sockets from inside the kernel. The two interfaces are siblings.

Should I write a kernel driver, or can I do this in user mode?

Before writing a callout driver, ask: does the policy need per-packet kernel visibility, or would a user-mode service that consumes ETW events from Microsoft-Windows-WFP and the firewall's ETW providers be enough? Most logging and detection use cases are answered by ETW. A callout driver is justified when you need to act on traffic (drop, redirect, modify, inspect payload), not just observe it. The kernel attack surface that comes with a callout, documented in Section 8, is now yours to share once you ship.

The detection-engineering surface lives in ETW. The two providers to know are Microsoft-Windows-WFP and Microsoft-Windows-Windows Firewall With Advanced Security. Provider names alone do not do the subject justice; the cross-reference footer below points at the dedicated ETW article in this series.

You now have a mental map of every place WFP touches a Windows host -- under the firewall UI, under IPsec, under WinNAT, under the Hyper-V vSwitch, under Defender for Endpoint, under every EDR. The FAQ disarms the last eight misconceptions.

11. Frequently Asked Questions

Frequently asked questions

Is WFP the firewall?

No. WFP is the platform; the Windows Firewall (WFAS, service name MpsSvc) is one consumer of it. Microsoft's start page makes the relationship explicit: "Windows Firewall with Advanced Security (WFAS) is implemented using WFP" [2]. The Base Filtering Engine service (bfe) hosts the user-mode side of WFP and accepts policy from MpsSvc over RPC [4]. Two user-mode services and a kernel-mode classification path, one platform.

Does fwpkclnt.sys do the filtering?

No. fwpkclnt.sys is the kernel-mode WFP client and export driver. Callout drivers link against fwpkclnt.lib, whose in-memory form is fwpkclnt.sys [23]. The classification path -- the code that walks sublayers and filters -- runs primarily inside NETIO.SYS, as Forshaw documents in his Project Zero post [4]. The shorthand "fwpkclnt.sys is the filter engine" is common online and incorrect.

Is BFE the same as MpsSvc?

No. BFE (service name bfe) is the Base Filtering Engine -- the platform service that controls WFP and plumbs configuration to other modules, including IPsec keying [3]. MpsSvc is the Windows Defender Firewall service. MpsSvc depends on bfe; the dependency is not reciprocal [4].

Does WFP see TLS or QUIC payloads?

No. WFP callouts see plaintext only for non-IPsec, non-TLS payloads, or for IPsec traffic where the kernel holds the keys. TLS 1.3 and QUIC are end-to-end encrypted from a callout's perspective; the keys live in user-mode TLS libraries inside the application. Microsoft's own Defender for Endpoint Network Protection documentation acknowledges the limit: "Blocking FQDNs in non-Microsoft browsers requires that QUIC and Encrypted Client Hello be disabled in those browsers" [28]. Section 8 calls this the encryption ceiling.

Did SecureNAT come back?

No. SecureNAT is an ISA Server / Forefront Threat Management Gateway concept, retired with TMG. The modern Windows-host NAT on WFP is WinNAT, managed by the New-NetNat PowerShell cmdlet [26]. Windows containers use WinNAT for their default NAT switch. "SecureNAT" still surfaces in older writing as a supposed WFP consumer; the correct modern name is WinNAT.

Is WSK an acronym for 'Windows Sockets Kernel'?

No. WSK is Winsock Kernel. Microsoft Learn's introduction is unambiguous: "Winsock Kernel (WSK) is a kernel-mode Network Programming Interface (NPI)" [39]. The expansion begins with "Winsock," the established brand of the Windows sockets API, not with the spelled-out phrase "Windows Sockets."

Did the 2024 SharePoint CVE-2024-21318 break BFE?

No. CVE-2024-21318 is a Microsoft SharePoint Server deserialization remote code execution vulnerability, unrelated to the Base Filtering Engine. The 2024 WFP elevation-of-privilege vulnerability is CVE-2024-38034: a CWE-190 integer overflow with a CVSS base of 7.8 [20]. This article tracks CVE-2024-38034 and CVE-2023-29368 as the two anchor BFE CVEs.

Can a WFP callout block QUIC?

Only at the 5-tuple level (IP, port, protocol) before or after a connection establishes. Once a QUIC connection is up, the encryption ceiling applies and the kernel has no key for the encrypted payload [28]. FQDN-level blocking of QUIC over Network Protection requires QUIC to be disabled in the browser, per Microsoft's own troubleshooting guide [28]. Deep inspection of QUIC content from the kernel is not possible with WFP alone.


See also. The Microsoft-Windows-WFP and Microsoft-Windows-Windows Firewall ETW providers are how detection-engineering teams see WFP from outside the kernel; the dedicated ETW article in this series goes deeper on the provider names, manifests, and parsing. The Antimalware Scan Interface (AMSI) sits on the process-side path that complements WFP's network-side path; the two are siblings, not substitutes. And the \Device\Ipfilterdriver device object that this article retired in Section 3 lives in the Windows Object Manager namespace, whose architecture is the subject of the Object Manager article in this series.

Study guide

Key terms

Windows Filtering Platform (WFP)
Cross-mode kernel/user-mode filtering service introduced in Windows Vista that replaced NDIS-IM, filter-hook, TDI-filter, and Winsock LSP as the in-box network filtering surface.
Base Filtering Engine (BFE)
The Windows service (bfe) that controls WFP and plumbs configuration to other modules. Not the same as MpsSvc.
Filter Engine
The core WFP component that stores filters and performs filter arbitration. Hosted in both kernel mode and user mode; kernel classification runs primarily in NETIO.SYS.
Shim
Kernel-mode bridge between a network-stack module and the filter engine. Vista shipped six: ALE, Transport Layer Module, Network Layer Module, ICMP Error, Discard, Stream.
Application Layer Enforcement (ALE)
Set of WFP layers used for stateful filtering and the only layers where filters can match on application identity (normalized file name) and user identity (security descriptor).
Sublayer
Priority-ordered subdivision of a WFP filtering layer. Vendors are expected to create their own sublayer via FwpmSubLayerAdd0.
Filter Weight
64-bit value ordering filter evaluation within a sublayer. May be set as an explicit FWP_UINT64, generated by BFE (FWP_EMPTY), or partitioned into one of 16 high-order ranges via FWP_UINT8.
Callout Driver
Kernel driver registered with the filter engine that performs deep inspection, packet modification, stream modification, or data logging when a filter selects it.
fwpkclnt.sys
Kernel-mode WFP client / export driver. Callouts link against fwpkclnt.lib; the in-memory module is fwpkclnt.sys. Not the filter engine.
Winsock Kernel (WSK)
Kernel-mode sockets NPI introduced in Vista. WFP filters traffic; WSK opens sockets from inside the kernel. Replaces TDI for kernel-mode socket clients.
NDIS Lightweight Filter (LWF)
L2 filter driver introduced in NDIS 6.0 to replace NDIS 5.x intermediate drivers. Sees Ethernet frames before IP processing; no application identity.

Comprehension questions

  1. Why did the four pre-WFP hooks (NDIS-IM, filter-hook, TDI-filter, LSP) fail collectively?

    Each hook had a specific architectural flaw -- one callback pointer with no documented chaining for filter-hook, no application identity for NDIS-IM, a deprecated substrate for TDI, in-process bypass for LSP. Together those flaws made multi-vendor coexistence impossible, which the Pawar/Stenson 2006 WinHEC deck pinned at 12 percent of all OS crashes.

  2. What is the difference between BFE and MpsSvc?

    BFE is the Base Filtering Engine (the WFP platform service). MpsSvc is the Windows Defender Firewall service (one consumer of the platform). MpsSvc depends on BFE; the dependency is one-way.

  3. How does WFP arbitrate two filters at the same layer with the same priority?

    Filters live inside sublayers. Sublayers are priority-ordered; filters within a sublayer are weight-ordered. Hard Block and Hard Permit are terminal; Soft Block and Soft Permit can be overridden by a later evaluator; Block overrides Permit when nothing else terminates evaluation.

  4. Why can a WFP callout not block QUIC by hostname?

    QUIC encrypts almost all of its control plane from the first byte using a key derived from the connection's initial secret. The kernel has no access to that key; the keys live in the application's user-mode QUIC stack. WFP can block QUIC only at the 5-tuple level. FQDN blocking requires QUIC and Encrypted Client Hello to be disabled in the browser, per Microsoft's own Network Protection documentation.

  5. What is the structural reason BFE keeps producing elevation-of-privilege CVEs?

    Every callout is a kernel module and every Fwpm* call crosses a user-to-kernel ABI. The integer-overflow / use-after-free attack surface is intrinsic to that boundary. CVE-2023-29368 (CWE-415, CVSS 7.0) and CVE-2024-38034 (CWE-190, CVSS 7.8) are the two anchor CVEs of the class; the 2024 vulnerability is rated easier to weaponise than the 2023 one.

References

  1. Management Filtering Layer Identifiers. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/management-filtering-layer-identifiers-
  2. Windows Filtering Platform -- Start Page. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/windows-filtering-platform-start-page
  3. About Windows Filtering Platform. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/about-windows-filtering-platform
  4. James Forshaw (2021). Understanding Network Access in Windows AppContainers. Google Project Zero. https://projectzero.google/2021/08/understanding-network-access-windows-app.html
  5. Windows Firewall. Wikipedia. https://en.wikipedia.org/wiki/Windows_Firewall
  6. NDIS Filter Drivers. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/ndis-filter-drivers
  7. IOCTL_PF_SET_EXTENSION_POINTER (filter-hook driver). Microsoft Learn (legacy). https://learn.microsoft.com/en-us/previous-versions/windows/hardware/network/ff548976(v=vs.85)
  8. Categorizing Layered Service Providers and Applications. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/winsock/categorizing-layered-service-providers-and-applications
  9. Network Driver Design Guide -- TDI deprecation. Microsoft Learn (legacy). https://learn.microsoft.com/en-us/previous-versions/windows/hardware/network/ff565094(v=vs.85)
  10. Madhurima Pawar & Eric Stenson (2006). Windows Filtering Platform And Winsock Kernel: Next-Generation Kernel Networking APIs. WinHEC 2006, Seattle (Microsoft). https://www.slideshare.net/slideshow/windows-filtering-platform-and-winsock-kernel/1394068
  11. Windows Vista. Wikipedia. https://en.wikipedia.org/wiki/Windows_Vista
  12. Filter Engine. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/filter-engine
  13. Application Layer Enforcement (ALE). Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/application-layer-enforcement--ale-
  14. (2010). A Windows Filtering Platform (WFP) driver hotfix rollup package is available for Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2. Microsoft Support (KB981889). https://support.microsoft.com/en-us/help/981889/a-windows-filtering-platform-wfp-driver-hotfix-rollup-package-is-avail
  15. What's New in Windows Filtering Platform. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/what-s-new-in-windows-filtering-platform
  16. ALE Layers. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/ale-layers
  17. Introduction to NDIS 6.89. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/introduction-to-ndis-6-89
  18. Windows 11, version 24H2. Wikipedia. https://en.wikipedia.org/wiki/Windows_11,_version_24H2
  19. (2023). CVE-2023-29368 -- Windows Filtering Platform Elevation of Privilege Vulnerability. NIST National Vulnerability Database. https://nvd.nist.gov/vuln/detail/CVE-2023-29368
  20. (2024). CVE-2024-38034 -- Windows Filtering Platform Elevation of Privilege Vulnerability. NIST National Vulnerability Database. https://nvd.nist.gov/vuln/detail/CVE-2024-38034
  21. Filter Arbitration. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/filter-arbitration
  22. Filter Weight Assignment. Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/fwp/filter-weight-assignment
  23. Windows Filtering Platform Architecture Overview. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/windows-filtering-platform-architecture-overview
  24. Built-in Callout Identifiers. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/built-in-callout-identifiers
  25. Introduction to Windows Filtering Platform Callout Drivers. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/introduction-to-windows-filtering-platform-callout-drivers
  26. New-NetNat. Microsoft Learn. https://learn.microsoft.com/en-us/powershell/module/netnat/new-netnat
  27. Hyper-V Extensible Switch. Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/hyper-v-extensible-switch
  28. Microsoft Defender for Endpoint -- Network Protection. Microsoft Learn. https://learn.microsoft.com/en-us/defender-endpoint/network-protection
  29. Mark Russinovich, David Solomon, Alex Ionescu, Pavel Yosifovich, & Andrea Allievi (2021). Windows Internals, Part 2, 7th edition. Microsoft Press. ISBN 978-0-13-546240-9. The networking chapter covers WFP architecture.
  30. eBPF for Windows -- README. microsoft/ebpf-for-windows on GitHub. https://github.com/microsoft/ebpf-for-windows
  31. (2025). eBPF for Windows -- Releases (v1.1.0). microsoft/ebpf-for-windows on GitHub. https://github.com/microsoft/ebpf-for-windows/releases
  32. eBPF for Windows -- GitHub Pages. Microsoft. https://microsoft.github.io/ebpf-for-windows/
  33. Daniel Firestone (2017). VFP: A Virtual Switch Platform for Host SDN in the Public Cloud. USENIX NSDI 17. https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/firestone
  34. UDP Receive Segment Coalescing Offload (URO). Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/udp-rsc-offload
  35. (2021). Microsoft WFP AppContainer rules permit a low-privilege bypass (Project Zero issue 2207). Security Affairs. https://securityaffairs.com/121370/hacking/microsoft-wfp-appcontainer-bypass.html
  36. Microsoft Edge and Microsoft Defender Application Guard. Microsoft Learn. https://learn.microsoft.com/en-us/deployedge/microsoft-edge-security-windows-defender-application-guard
  37. Windows Filtering Platform. Wikipedia. https://en.wikipedia.org/wiki/Windows_Filtering_Platform
  38. WFPSampler -- Windows-driver-samples (network/trans/WFPSampler). microsoft/Windows-driver-samples on GitHub. https://github.com/microsoft/Windows-driver-samples/tree/main/network/trans/WFPSampler
  39. Introduction to Winsock Kernel (WSK). Microsoft Learn (WDK). https://learn.microsoft.com/en-us/windows-hardware/drivers/network/introduction-to-winsock-kernel