# Windows Filtering Platform: The Kernel-Mode Firewall You Don't See

> The Windows Filtering Platform is the kernel-mode engine under wf.msc, IPsec, WinNAT, the Hyper-V vSwitch, and every modern Windows EDR.

*Published: 2026-05-12*
*Canonical: https://paragmali.com/blog/windows-filtering-platform-the-kernel-mode-firewall-you-dont*
*License: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/*

---
Open `wf.msc`. Right-click "Inbound Rules," click "New Rule," fill in the form, click OK. You think you just configured a firewall. What you actually did was register one filter, inside one sublayer, at one of roughly sixty filtering layers in the kernel-mode classification path of a platform you have never named. The same platform is also running IPsec, container networking, Microsoft Defender for Endpoint's network protection, and every third-party EDR's network-telemetry pipeline on the Windows host you are using right now.

<TLDR>
The Windows Filtering Platform (WFP) is the kernel- and user-mode service Microsoft shipped with Windows Vista in November 2006 to replace four mutually incompatible XP-era hooks: NDIS intermediate drivers, the filter-hook IOCTL on `\Device\Ipfilterdriver`, Winsock Layered Service Providers, and TDI filter drivers. It is the substrate beneath Windows Defender Firewall, Windows IPsec, WinNAT, the Hyper-V Extensible Switch, Defender for Endpoint Network Protection, and every third-party EDR's network telemetry. WFP is not a firewall. It is the platform that a firewall is one consumer of. It arbitrates competing security products deterministically through 64-bit filter weights inside priority-ordered sublayers, and that arbitration model is the load-bearing reason third-party callouts can finally coexist on the same host. The same kernel-extensibility tax that doomed the pre-WFP hooks now resurfaces as a steady drip of Base Filtering Engine elevation-of-privilege CVEs (CVE-2023-29368, CVE-2024-38034) -- the running cost of a platform sophisticated enough to host every downstream network-security feature Windows ships.
</TLDR>

## 1. You Just Clicked OK on Sixty Filtering Layers

The firewall UI is the visible one percent of WFP. Almost every modern Windows network-security feature is a configuration of the same engine.

That is the central claim of this article, and it is the kind of statement that sounds like marketing until you trace the actual wires. Trace them once and you stop seeing "Windows Defender Firewall" and "IPsec" and "Windows containers" as separate products. They are all clients of the same kernel/user-mode service, configuring the same filter engine, arbitrated by the same Base Filtering Engine, classified across the same approximately sixty `FWPM_LAYER_*` identifiers [@wfp-layers].

<Definition term="Windows Filtering Platform (WFP)">
Microsoft's cross-mode network-traffic filtering service introduced in Windows Vista and Windows Server 2008. WFP "is designed to replace previous packet filtering technologies such as Transport Driver Interface (TDI) filters, Network Driver Interface Specification (NDIS) filters, and Winsock Layered Service Providers (LSP)" [@wfp-start]. The platform has five components: the Filter Engine, the Base Filtering Engine, a set of kernel-mode shims, callout drivers, and the management API [@wfp-about].
</Definition>

<Definition term="Base Filtering Engine (BFE)">
A Windows service named `bfe` that, in Microsoft's own words, "controls the operation of the Windows Filtering Platform" and "plumbs configuration settings to other modules in the system. For example, IPsec negotiation policies go to IKE/AuthIP keying modules, filters go to the filter engine" [@wfp-about]. The BFE is not the Windows Firewall. The Windows Firewall is a separate service (`MpsSvc`) that talks to the BFE.
</Definition>

The naming is the first thing that trips readers. There is a service called BFE and a service called MpsSvc. They live in different rows of `Get-Service` output. They have different binary backings. The dependency arrow runs one way: MpsSvc requires BFE, never the other direction. That asymmetry, which seems pedantic, turns out to be load-bearing for the rest of the story. WFP is the platform. The firewall is a tenant.

> **Key idea:** The firewall UI is the visible one percent of WFP. Almost every modern Windows network-security feature -- Windows Defender Firewall with Advanced Security, Windows IPsec, WinNAT and container networking, the Hyper-V Extensible Switch, Microsoft Defender for Endpoint Network Protection, every third-party EDR with a network filter -- is a configuration of the same engine [@forshaw-2021].

If WFP is the engine, what was there before it? Why did Microsoft need to build a platform when Windows XP SP2 had already shipped a firewall?

## 2. Before WFP -- An Internet on Fire

April 2004. Sasser is propagating through the LSASS RPC interface on port 445, infecting unpatched Windows machines within minutes of their first cable plug. Microsoft is months away from shipping Windows XP SP2, with the Internet Connection Firewall rebranded as "Windows Firewall" and turned on by default for the first time [@wiki-winfw].<Sidenote>Wikipedia notes that "the ongoing prevalence of these worms through 2004 resulted in unpatched machines being infected within a matter of minutes," and that Microsoft "switched it on by default since Windows XP SP2." XP SP2 reached general availability on August 25, 2004 [@wiki-winfw].</Sidenote> When SP2 landed that August, it fixed the worm problem. It did not fix the plumbing problem.

The plumbing problem was that third-party security vendors were already hooking the Windows network stack at four different, mutually incompatible places, none of which arbitrated with the others. ZoneAlarm, Norton Internet Security, McAfee, Kerio, Check Point, BlackICE, and a dozen others were shipping kernel drivers that bolted onto Windows wherever they could find a callable surface [@wiki-winfw][@forshaw-2021]. They picked four families.

**Network Driver Interface Specification (NDIS) intermediate drivers.** NDIS 5.x exposed a profile called the intermediate driver that sat below the protocol stack and above the miniport. A vendor could install a driver that saw every Ethernet frame on the way up and every IP packet on the way down. The price was complexity: NDIS intermediate drivers had to participate in the entire NDIS binding state machine, and Microsoft's own documentation later admitted that the model was painful enough that the platform team replaced it with the much simpler NDIS Lightweight Filter (LWF) in NDIS 6.0 [@ndis-filter].

**Filter-hook drivers on `\Device\Ipfilterdriver`.** The IP filter driver exposed a single IOCTL, `IOCTL_PF_SET_EXTENSION_POINTER`, that registered a single callback function the kernel would invoke on every received or transmitted IP packet [@ipfilter-legacy]. There was one callback pointer per machine. IPv4 only. Network layer only. No documented contract for what happened when a second vendor registered.

**Winsock Layered Service Providers (LSPs).** A user-mode shim chained into every Winsock application, in process. LSPs had access to per-application context, but their cost was paid in blast radius: Microsoft's own categorisation guide warned that "certain system critical processes such as winlogon and lsass create sockets" and that "a number of cases have also been documented where buggy LSPs can cause `lsass.exe` to crash. If lsass crashes, the system forces a shutdown" [@lsp-categories].

<Definition term="Winsock Layered Service Provider (LSP)">
A user-mode DLL that chains into the Winsock service-provider stack of every process that opens a socket. LSPs were the Windows mechanism for content inspection and per-application network rules before Vista. They are still installable, but Microsoft's documentation now categorises which processes must not load them because of the lsass-crash failure mode [@lsp-categories].
</Definition>

**TDI filter drivers.** The Transport Driver Interface, the legacy kernel interface above TCP/IP, supported a filter-driver pattern that preserved application identity and could veto connections at the transport. It was the cleanest of the four options. It also stopped being a viable target the moment Microsoft deprecated TDI in Vista: "The TDI feature is deprecated and will be removed in future versions of Microsoft Windows. Depending on how you use TDI, use either the Winsock Kernel (WSK) or Windows Filtering Platform (WFP)" [@tdi-legacy].

Four hooks, four failure modes, no arbitration between any of them. In May 2006 Madhurima Pawar and Eric Stenson of Windows Networking walked the WinHEC audience through one number that captured the consequence: firewall and antivirus conflicts accounted for 12 percent of all Windows operating-system crashes [@pawar-stenson-winhec].

<PullQuote>
"Reduces firewall and anti-virus crashes -- 12% of all OS crashes." -- Madhurima Pawar and Eric Stenson, WinHEC 2006 [@pawar-stenson-winhec]
</PullQuote>

That is the design motivation for WFP in a single slide bullet. The XP-era hook zoo was not a security architecture; it was a steady source of bluescreens. Looking back at the era from Vista, Microsoft's documentation reads: "Starting in Windows Server 2008 and Windows Vista, the firewall hook and the filter hook drivers are not available; applications that were using these drivers should use WFP instead" [@wfp-start]. As Forshaw later summarised it, "these firewalls were implemented by hooking into Network Driver Interface Specification (NDIS) drivers or implementing user-mode Winsock Service Providers but this was complex and error prone" [@forshaw-2021].

<Mermaid caption="The four pre-WFP extensibility points and where they sat in the Windows network stack">
flowchart TD
    NIC[Physical NIC] --> MINI[NDIS miniport driver]
    MINI --> IM["NDIS 5.x intermediate driver<br/>(hook #1: NDIS-IM)"]
    IM --> TCPIP[TCPIP.SYS]
    TCPIP -.-> IPF["\Device\Ipfilterdriver<br/>(hook #2: filter-hook IOCTL)"]
    TCPIP --> TDI["TDI transport providers"]
    TDI --> TDIF["TDI filter driver<br/>(hook #3: TDI filter)"]
    TDIF --> AFD[AFD.SYS]
    AFD --> WS2[ws2_32.dll Winsock]
    WS2 --> LSP["Winsock LSP chain<br/>(hook #4: in-process LSP)"]
    LSP --> APP[Application]
</Mermaid>

So why didn't Microsoft just fix the hooks? Why a whole new platform?

## 3. Why Four Hooks Could Not Be Saved

Picture a Windows XP machine in 2005, four months past SP2. The user, doing what users do, installs two antivirus suites: one from a free trial that came with the laptop, one from work. Each ships a kernel driver. Each one calls `IOCTL_PF_SET_EXTENSION_POINTER` on `\Device\Ipfilterdriver` to register a packet-inspection callback [@ipfilter-legacy]. An hour later the machine bluescreens during a Windows Update download.

The Microsoft documentation for the IOCTL is precise about what the call does ("registers filter-hook callback functions to the IP filter driver to inform the IP filter driver to call those filter hook callbacks for every IP packet that is received or transmitted") and silent about what happens if a second driver makes the same call before the first one unregisters [@ipfilter-legacy]. The page does not document chaining semantics. There is no mention of a registration list, a callback array, a refcount, or a priority. The driver writers got to invent that themselves, separately, in shipped products. The crash reports speak for the result.

> **Note:** Microsoft Learn documents the filter-hook registration mechanism on `\Device\Ipfilterdriver` exactly once, in the legacy reference for `IOCTL_PF_SET_EXTENSION_POINTER` [@ipfilter-legacy]. The page tells you how to register a callback. It does not tell you what happens when two callers register concurrently. That gap is the architectural bug. The 12-percent-of-OS-crashes number from WinHEC 2006 is the bill [@pawar-stenson-winhec].

Each of the four pre-WFP hooks had a specific architectural flaw. Together those flaws define what WFP had to be.

**Filter-hook (IpFilterDriver).** One callback pointer per machine; no arbitration; IPv4 only; network layer only. Two security products fight over one callback, and there is no documented way to chain them. Failure: arbitration impossible, vendor coexistence accidental.

**NDIS 5.x intermediate driver.** High complexity, no application identity (it sees frames, not processes), install-order-dependent binding chains. Microsoft's own assessment of the model, written for the LWF replacement that came in 2006, is: "Filter drivers are easier to implement and have less processing overhead than NDIS intermediate drivers" [@ndis-filter]. Failure: too low for app-aware policy, too painful to write.

**TDI filter.** Preserved application identity. Vetoed connections at the transport boundary. Architecturally the cleanest of the four. Then Microsoft deprecated TDI in Vista [@tdi-legacy] and the substrate evaporated. Failure: the floor disappeared.

**Winsock LSP.** In-process. User mode. Bypassable by any program that called `Nt*` system services directly. And, as the Microsoft categorisation page documents, a buggy LSP that crashes LSASS will take down the entire machine [@lsp-categories]. Failure: in process, bypassable, lethal when buggy.

| Pre-WFP hook | Layer | App identity | Multi-vendor | Failure mode | Successor |
|---|---|---|---|---|---|
| Filter-hook (`IpFilterDriver`) | Network (L3) | No | No documented contract for chaining | Arbitration impossible [@ipfilter-legacy] | WFP filter at `INBOUND_IPPACKET_*` |
| NDIS 5.x intermediate | Data link (L2) | No | Install-order dependent | Too low for app-aware rules; complex [@ndis-filter] | NDIS Lightweight Filter (LWF) |
| TDI filter | Transport (L4) | Yes | Yes (chainable) | Substrate deprecated in Vista [@tdi-legacy] | WFP ALE + Winsock Kernel (WSK) |
| Winsock LSP | Above sockets (user mode) | Yes | Chainable in-process | In-process bypass; lsass blast radius [@lsp-categories] | WFP ALE; LSP retained for non-security uses |

Walk those failure modes column by column and a design constraint set falls out. Whatever Microsoft was going to build had to:

1. Arbitrate multiple vendors deterministically. No more "first IOCTL wins."
2. Carry application identity through to the inspection point.
3. Concentrate inspection at one platform, not four.
4. Run out of process where possible. A buggy callout cannot be allowed to take down LSASS.
5. Resolve conflicts predictably, with rules a third-party developer can read and design against.

<Mermaid caption="Why filter-hook fails: one callback pointer, two security products, no documented chaining">
sequenceDiagram
    participant A as Vendor A installer
    participant B as Vendor B installer
    participant K as \Device\Ipfilterdriver
    participant P as IP packet path
    A->>K: IOCTL_PF_SET_EXTENSION_POINTER(callback_A)
    Note over K: callback = callback_A
    B->>K: IOCTL_PF_SET_EXTENSION_POINTER(callback_B)
    Note over K: callback = callback_B (no chaining contract)
    P->>K: packet arrives
    K->>B: callback_B(packet)
    Note over A: callback_A no longer invoked, vendor A stops working
    A->>K: re-register callback_A
    Note over K: race: pointer flips again
    K--xP: inconsistent state, BSOD
</Mermaid>

Vista shipped November 2006. What did the architects build to satisfy all five constraints at once?

## 4. The Evolution -- Four Generations of WFP

May 23-25, 2006, Seattle. Madhurima Pawar, Program Manager in Windows Networking, and Eric Stenson, Development Lead in Windows Networking, stand in front of a hostile room of third-party firewall ISVs at WinHEC and present "Windows Filtering Platform And Winsock Kernel: Next-Generation Kernel Networking APIs." Slide 6 carries the design motivation that this article opened on: 12 percent of all OS crashes are firewall and AV conflicts. Slide 7 carries the architecture diagram [@pawar-stenson-winhec]. Six months later Vista shipped, with the filter-hook and firewall-hook drivers gone from the system and a new platform in their place [@wfp-start].<Sidenote>Windows Vista was released to manufacturing on November 8, 2006, and made generally available to consumers on January 30, 2007 [@wiki-vista].</Sidenote>

### Generation 1: WFP v1 in Vista and Server 2008

WFP v1 introduced five named components. They are still the components the platform ships today. Microsoft's own "About Windows Filtering Platform" page enumerates them: the Filter Engine ("the core multi-layer filtering infrastructure, hosted in both kernel-mode and user-mode"); the Base Filtering Engine ("a service that controls the operation of the Windows Filtering Platform"); shims ("kernel-mode components that reside between the kernel-mode network stack and the filter engine"); callout drivers; and the management API [@wfp-about].

<Definition term="Filter Engine">
The core of WFP. Microsoft's WDK reference defines it as "a component of the Windows Filtering Platform that stores filters and performs filter arbitration. Filters are added to the filter engine at designated filtering layers so that the filter engine can perform the desired filtering action (permit, drop, or a callout). If a filter in the filter engine specifies a callout for the filter's action, the filter engine calls the callout's classifyFn function" [@wfp-filter-engine]. The engine is hosted in both kernel mode and user mode; its kernel classification path runs primarily inside `NETIO.SYS` [@forshaw-2021].
</Definition>

<Definition term="Shim">
A kernel-mode bridge between a specific network stack module and the WFP filter engine. Vista shipped six shims: the Application Layer Enforcement (ALE) shim, the Transport Layer Module shim, the Network Layer Module shim, the ICMP Error shim, the Discard shim, and the Stream shim [@wfp-about]. Each shim invokes the filter engine at one or more `FWPM_LAYER_*` identifiers when traffic crosses it.
</Definition>

The most consequential of those six shims is ALE.

<Definition term="Application Layer Enforcement (ALE)">
"A set of Windows Filtering Platform (WFP) kernel-mode layers that are used for stateful filtering" [@wfp-ale]. ALE keeps per-connection state across packets, and -- this is the line that separates ALE from the rest of the platform -- "ALE layers are the only WFP layers where network traffic can be filtered based on the application identity -- using a normalized file name -- and based on the user identity -- using a security descriptor" [@wfp-ale]. ALE is why per-application firewall rules became possible in 2006. It is also the layer that classifies AppContainer connections in modern Windows.
</Definition>

ALE pays for stateful filtering once per connection, not once per packet. The Microsoft Learn page makes the performance claim explicit: at ALE layers, the platform "minimally impacts network performance by processing only the first packet in a connection" [@wfp-about]. Subsequent packets ride the existing flow state. That choice is what lets a per-process firewall rule scale to gigabit network rates.
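
To see the cost structure concretely, here is a toy model of first-packet-only classification. Everything in it -- the flow table, the key shape, the helper names -- is invented for the sketch, not lifted from NETIO.SYS; the only claim it encodes is the documented one, that the full filter walk runs once per connection [@wfp-about].

<RunnableCode lang="js" title="Why ALE scales: classify once per flow, then ride cached state (toy model)">{`
// Toy model of ALE's documented cost structure: the expensive filter walk
// runs for a connection's first packet only; later packets hit per-flow
// state. All names and shapes here are this sketch's own, not NETIO.SYS.

const flowTable = new Map(); // 5-tuple string -> cached verdict

function expensiveFilterWalk(pkt) {
  // Stand-in for the full sublayer/weight arbitration pass of Section 5.
  return pkt.app === 'chrome.exe' ? 'Permit' : 'Block';
}

function classify(pkt) {
  const key = [pkt.proto, pkt.src, pkt.sport, pkt.dst, pkt.dport].join('/');
  if (flowTable.has(key)) {
    return { verdict: flowTable.get(key), walkedFilters: false };
  }
  const verdict = expensiveFilterWalk(pkt); // paid once, on the first packet
  flowTable.set(key, verdict);
  return { verdict, walkedFilters: true };
}

const pkt = { proto: 'tcp', src: '10.0.0.2', sport: 51000,
              dst: '203.0.113.5', dport: 443, app: 'chrome.exe' };
console.log(classify(pkt)); // { verdict: 'Permit', walkedFilters: true  }
console.log(classify(pkt)); // { verdict: 'Permit', walkedFilters: false }
`}</RunnableCode>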

<Aside label="The 2010 hotfix-rollup story">
April 12, 2010. Microsoft ships a Windows Filtering Platform driver hotfix rollup, KB981889, that bundles three previously separate fixes into one package. The Microsoft Support page enumerates them verbatim [@kb981889]:

*KB976759 -- "WFP drivers may cause a failure to disconnect the RDP connection to a multiprocessor computer."*

*KB979278 -- "Using two Windows Filtering Platform (WFP) drivers causes a computer to crash."*

*KB979223 -- "A nonpaged pool memory leak occurs when you use a WFP callout driver."*

Read KB979278 again. *Two WFP drivers cause a crash.* The XP-era "two AV vendors fight" bug had survived into the new platform, in a different shape: the WFP arbitration model held -- the conflict between filters was deterministic -- but the *callout driver lifecycle* had not yet been hardened. That distinction is the structural seed of the BFE elevation-of-privilege CVE class fifteen years later. Section 8 returns to it.
</Aside>

### Generation 2: WFP v2 in Windows 8 and Server 2012

Windows 8 and Server 2012 shipped a refresh in 2012. The "What's New in Windows Filtering Platform" page enumerates the delta in four bullets [@wfp-whatsnew]:

> "Layer 2 filtering: Provides access to the L2 (MAC) layer, allowing filtering of traffic at that layer. vSwitch filtering: Allows packets traversing a vSwitch to be inspected and/or modified. WFP filters or callouts can be used at the vSwitch ingress and egress. App container management: Allows access to information about app containers and network isolation connectivity issues. IPsec updates: Extended IPsec functionality including connection state monitoring, certificate selection, and key management." [@wfp-whatsnew]

Four features, but the second one -- vSwitch filtering -- is the architecturally significant one. With Windows 8, WFP slid under the Hyper-V Extensible Switch. From that release forward, every Hyper-V VM's packet path is a WFP-extensible classification problem, and the same kernel-mode platform that filters host traffic also filters tenant traffic [@wfp-whatsnew].

### Generation 3: Windows 10 ALE redirection (2015-2021)

The Windows 10 family added two ALE layers that did not exist in Vista: `CONNECT_REDIRECT` and `BIND_REDIRECT`. The "ALE Layers" page lists them at the bottom of its enumeration [@wfp-ale-layers]. Their job is exactly what their names say -- redirect an outbound connection (proxy it through a different address), or redirect a bind (force a process to bind to a different local endpoint). Web proxies, transparent forwarders, and AppContainer policy now had a kernel-side hook that did not exist before. Forshaw's 2021 Project Zero post documents how the modern Windows Defender Firewall pipeline runs through these layers end-to-end: "MPSSVC converts its ruleset to the lower-level WFP firewall filters and sends them over RPC to the Base Filtering Engine (BFE) service. These filters are then uploaded to the TCP/IP driver (TCPIP.SYS) in the kernel... The evaluation is handled primarily by the NETIO driver as well as registered callout drivers" [@forshaw-2021].

### Generation 4: URO and the CVE drumbeat (2022-2024)

The most recent generation comes in two parallel tracks. The first is a hardware offload feature. NDIS 6.89, the version of the NDIS driver interface that "is included in Windows 11, version 24H2 and Windows Server 2022 and later," adds support for UDP Receive Segment Coalescing Offload, "this hardware offload enables NICs to coalesce UDP receive segments. NICs can combine UDP datagrams from the same flow that match a set of rules into a logically contiguous buffer. These combined datagrams are then indicated to the Windows networking stack as a single large packet" [@ndis-689]. Windows 11 24H2 reached general availability on October 1, 2024 [@wiki-win11-24h2].

The second track is a sequence of elevation-of-privilege CVEs in the Base Filtering Engine. CVE-2023-29368, published June 14, 2023, is a CWE-415 double-free with a CVSS base of 7.0 [@nvd-2023-29368]. CVE-2024-38034, published July 9, 2024, is a CWE-190 integer overflow with a CVSS base of 7.8 [@nvd-2024-38034]. Between the two records, the attack-complexity metric dropped from `AC:H` (high) to `AC:L` (low), and the exploitability sub-score rose from 1.0 to 1.8 [@nvd-2023-29368][@nvd-2024-38034]. The trend line is that BFE EoP is getting easier to weaponise, not harder.
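
The two exploitability numbers are reproducible arithmetic, not editorial judgment. CVSS v3.1 defines the exploitability sub-score as $8.22 \times AV \times AC \times PR \times UI$ with fixed metric weights; the sketch below plugs in the two published vectors and lands on exactly the 1.0 and 1.8 the NVD records report. The weight table comes from the CVSS v3.1 specification, not from either NVD entry.

<RunnableCode lang="js" title="Reproducing the 1.0 -> 1.8 exploitability jump from the published CVSS v3.1 vectors">{`
// CVSS v3.1 exploitability sub-score: 8.22 x AV x AC x PR x UI, using the
// standard metric weights from the CVSS v3.1 specification. Both BFE CVEs
// share AV:L, PR:L (scope unchanged), and UI:N; only AC differs.

const W = {
  AV: { L: 0.55 },          // Attack Vector: Local
  AC: { H: 0.44, L: 0.77 }, // Attack Complexity: High vs Low
  PR: { L: 0.62 },          // Privileges Required: Low (scope unchanged)
  UI: { N: 0.85 },          // User Interaction: None
};

// Rounded to one decimal for display, matching the NVD records.
const exploitability = (av, ac, pr, ui) =>
  Math.round(8.22 * W.AV[av] * W.AC[ac] * W.PR[pr] * W.UI[ui] * 10) / 10;

console.log('CVE-2023-29368 (AV:L/AC:H/PR:L/UI:N):',
            exploitability('L', 'H', 'L', 'N')); // 1.0
console.log('CVE-2024-38034 (AV:L/AC:L/PR:L/UI:N):',
            exploitability('L', 'L', 'L', 'N')); // 1.8
`}</RunnableCode>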

<Mermaid caption="WFP component architecture: filter engine, BFE, shims, callouts, and the user-mode API">
flowchart TD
    UM["User-mode application<br/>(e.g. wf.msc / netsh / MpsSvc)"] --> API["Fwpm* management API<br/>(fwpuclnt.dll)"]
    API --> BFE["Base Filtering Engine service<br/>(bfe, user mode)"]
    BFE --> FE["Filter Engine<br/>(kernel + user mode)"]
    FE --> KCLI["fwpkclnt.sys<br/>(kernel-mode WFP client / export driver)"]
    FE --> NETIO["NETIO.SYS<br/>(classification path)"]
    NETIO --> ALE["ALE shim"]
    NETIO --> TLM["Transport-Layer shim"]
    NETIO --> NLM["Network-Layer shim"]
    NETIO --> STREAM["Stream shim"]
    NETIO --> ICMP["ICMP-Error shim"]
    NETIO --> DISC["Discard shim"]
    ALE --> COUT["Callout drivers<br/>(IPsec, in-box stealth, EDR, 3rd-party)"]
    TLM --> COUT
    NLM --> COUT
    STREAM --> COUT
    ICMP --> COUT
    DISC --> COUT
</Mermaid>

<Mermaid caption="WFP generations: Vista to Windows 11 24H2 (timeline source citations follow in adjacent prose)">
timeline
    title Four generations of the Windows Filtering Platform
    2006-11 : Windows Vista / Server 2008 -- WFP v1 (filter engine, BFE, six shims, callouts)
    2010-04 : KB981889 hotfix rollup -- three named WFP driver bugs, including two-WFP-drivers crash
    2012-09 : Windows 8 / Server 2012 -- WFP v2 (L2, vSwitch, AppContainer, IPsec extensions)
    2015-21 : Windows 10 -- ALE CONNECT_REDIRECT / BIND_REDIRECT, AppContainer-aware ALE
    2023-06 : CVE-2023-29368 published (CWE-415 double-free, CVSS 7.0)
    2024-07 : CVE-2024-38034 published (CWE-190 integer overflow, CVSS 7.8)
    2024-10 : Windows 11 24H2 -- NDIS 6.89 adds URO (UDP receive coalescing)
</Mermaid>

Timeline sources, in row order: WinHEC 2006 and the Vista release on the Microsoft Learn WFP start page [@pawar-stenson-winhec][@wfp-start]; KB981889 [@kb981889]; the "What's New" page [@wfp-whatsnew]; ALE Layers [@wfp-ale-layers] and Forshaw 2021 [@forshaw-2021]; the NVD records for CVE-2023-29368 and CVE-2024-38034 [@nvd-2023-29368][@nvd-2024-38034]; NDIS 6.89 introduction and the Windows 11 24H2 GA date [@ndis-689][@wiki-win11-24h2].

Four generations, one engine, no replacements. Why does the same engine still ship in 2026? What is the architectural insight that made it last?

## 5. Sublayers, Weights, and Veto -- The Arbitration Insight

Here is the question every Windows administrator has asked: how do two competing security products coexist on the same machine without crashing each other? Before Vista the honest answer was, "they didn't, mostly, and when they did it was an accident." After Vista the honest answer is, "WFP arbitrates them deterministically." The mechanism is the load-bearing piece of the platform, and it is built out of two ideas.

### Idea 1: Sublayers and weights

Microsoft's "Filter Arbitration" page describes the algorithm in two sentences that almost no Windows administrator has read:

> "Each filter layer is divided into sub-layers ordered by priority (also called weight). Network traffic traverses sub-layers from the highest priority to the lowest priority... Within each sub-layer, filters are ordered by weight. Network traffic is indicated to matching filters from highest weight to lowest weight." [@wfp-arbitration]

A layer (say, `FWPM_LAYER_ALE_AUTH_CONNECT_V4`, the place where outbound IPv4 TCP connection authorization is decided) contains an ordered list of sublayers. Each sublayer contains an ordered list of filters. Sublayer priority orders the sublayers. Filter weight orders the filters within a sublayer. Network traffic walks the structure top-down, sublayer by sublayer, filter by filter, until a terminal action is reached.

<Definition term="Sublayer">
A named, priority-ordered subdivision of a WFP filtering layer. Each sublayer owns a list of filters and has its own GUID. Microsoft's recommendation, in the filter-weight documentation, is that independent vendors "create their own sublayer by using `FwpmSubLayerAdd0`" rather than register filters into another vendor's sublayer [@wfp-weight]. Sublayer priority is what lets two vendors coexist without interfering.
</Definition>

<Definition term="Filter Weight">
A 64-bit value attached to a filter that orders evaluation within a sublayer. The "Filter Weight Assignment" page documents three legal assignment styles: "Set the weight to an FWP_UINT64. BFE uses the supplied weight as is. Set the weight to FWP_EMPTY. BFE automatically generates a weight in the range [0, 2^60). Set the weight to an FWP_UINT8 in the range [0, 15]. BFE uses the supplied weight as a weight range identifier" [@wfp-weight]. Sixteen weight ranges, each $2^{60}$ values wide, give vendors a way to carve out non-overlapping neighbourhoods.
</Definition>

The mathematical model is simpler than the prose suggests. Filter weight is an element of $[0, 2^{64})$. A filter at weight $w_1$ runs before a filter at weight $w_2$ inside the same sublayer if $w_1 > w_2$. Sublayer priority orders the sublayers themselves. When a vendor registers its sublayer at, say, priority 0x1000 and chooses filters in the weight range $[2^{60}, 2^{61})$, that vendor has a deterministic neighbourhood that no other vendor will trample, provided the other vendors follow Microsoft's recommendation to call `FwpmSubLayerAdd0` and use their own sublayer.<Sidenote>The 16-range partitioning via `FWP_UINT8` weights is the mechanism that the platform team baked in to give vendors a coordination protocol without requiring vendors to talk to each other. Microsoft Learn's recommendation, verbatim: "This issue can be prevented by having callouts create their own sublayer by using `FwpmSubLayerAdd0`" [@wfp-weight].</Sidenote>
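
The `FWP_UINT8` style is easiest to see as arithmetic. A minimal sketch of the sixteen-range partition described above -- the helper is this sketch's own, not an `Fwpm*` call, and BFE, not the caller, picks the effective weight inside the range [@wfp-weight]:

<RunnableCode lang="js" title="FWP_UINT8 weight ranges: carving the 64-bit weight space into sixteen neighbourhoods (illustrative)">{`
// Illustrative arithmetic for the FWP_UINT8 assignment style: a range
// identifier r in [0, 15] names one of sixteen 2^60-wide neighbourhoods of
// the 64-bit weight space. BFE generates the filter's effective weight
// inside that neighbourhood. The helper below is this sketch's own.

const RANGE_WIDTH = 1n << 60n; // 2^60 weights per range identifier

function weightRange(rangeId) {
  if (rangeId < 0 || rangeId > 15) {
    throw new RangeError('FWP_UINT8 weight range identifier must be 0..15');
  }
  const lo = BigInt(rangeId) * RANGE_WIDTH;
  return { lo, hi: lo + RANGE_WIDTH }; // the half-open range [lo, hi)
}

// Two vendors pick different range identifiers: their weight
// neighbourhoods can never interleave, whatever weights BFE generates.
const edr = weightRange(12);
const fw  = weightRange(4);
console.log(edr.lo >= fw.hi);            // true: the ranges are disjoint
console.log('0x' + edr.lo.toString(16)); // 0xc000000000000000
`}</RunnableCode>

Range identifier 1 is the $[2^{60}, 2^{61})$ neighbourhood from the worked sentence above; the sixteen ranges tile the full $[0, 2^{64})$ weight space with nothing left over.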

### Idea 2: Block-overrides-Permit with Veto

Filter arbitration is actually two passes, not one. Within a single sublayer, the engine evaluates matching filters in weight order from highest to lowest, and stops at the first filter that returns Permit or Block. That first matching filter wins; lower-weight filters in the same sublayer never run. The engine then performs the same pass on the next sublayer down. Once every sublayer has produced a verdict, the BFE composes those per-sublayer verdicts into one per-layer decision -- and that is where Block-over-Permit and the soft/hard override flag come in. The Filter Arbitration page states the rule for the second pass:

> "'Block' overrides 'Permit'. 'Block' is final (cannot be overridden) and stops the evaluation. The packet is discarded." [@wfp-arbitration]

"Block" and "Permit" each come in two variants. The variant is set by a per-action flag, `FWPS_RIGHT_ACTION_WRITE`, in the callout's classify-output structure: "If the flag is set, it indicates that the action can be overridden. If the flag is absent, the action cannot be overridden" [@wfp-arbitration]. The four-cell table below is the override-policy table the BFE uses to compose per-sublayer verdicts into one layer-level action.

| Action | Override allowed? | Common name | What it means |
|---|---|---|---|
| Permit + `FWPS_RIGHT_ACTION_WRITE` | Yes | Soft permit | A lower-priority sublayer's verdict (composed later by the BFE) may overturn it [@wfp-arbitration] |
| Permit, flag absent | No | Hard permit | Final permit; only a callout Veto in another sublayer can block. [@wfp-arbitration] |
| Block + `FWPS_RIGHT_ACTION_WRITE` | Yes | Soft block | A lower-priority sublayer may overturn it, but Block-over-Permit still applies if no override fires [@wfp-arbitration] |
| Block, flag absent | No | Hard block | Final block. Evaluation stops. Packet discarded. [@wfp-arbitration] |

The soft/hard distinction is therefore a cross-sublayer property, not a within-sublayer one. Within a sublayer the rule is "first match wins"; only the composition step between sublayers consults the override flag.

There is a fifth case. A callout that returns `FWP_ACTION_BLOCK` while it could have returned `FWP_ACTION_PERMIT` is exercising what the documentation calls a *Veto*. The callout has been given the opportunity to authorize a packet and has refused. That is how a third-party EDR's deep-inspection callout can refuse a flow that an in-box filter has already soft-permitted, without ever knowing the soft-permit happened: the engine offers the packet, the callout says no, and the no is final.

<Mermaid caption="Cross-sublayer arbitration: how soft/hard verdicts from each sublayer compose into one layer-level decision">
sequenceDiagram
    participant E as Filter engine
    participant S1 as Sublayer @ priority 100 (no matching filter)
    participant S2 as Sublayer @ priority 50 (winner: soft permit)
    participant S3 as Sublayer @ priority 10 (winner: hard permit)
    participant C as Deep-inspection callout (registered in default sublayer)
    E->>S1: evaluate highest-priority sublayer
    S1-->>E: no matching filter (Continue)
    E->>S2: evaluate next sublayer
    S2-->>E: Soft Permit (FWPS_RIGHT_ACTION_WRITE)
    Note over E: tentative layer action = Permit (overridable)
    E->>S3: evaluate next sublayer
    S3-->>E: Hard Permit (no override flag)
    Note over E: layer action = Permit (final unless a callout vetoes)
    E->>C: invoke callout for the permitted flow
    C-->>E: Veto -> Block (terminal)
    Note over E: final layer-level action = Block
</Mermaid>

Walk a worked example. An [AppContainer](/blog/appcontainer-and-lowbox-tokens-windowss-capability-sandbox/) process (an Edge tab, say, or any process launched with `CreateProcess` and an AppContainer SID token) tries to open an outbound TCP connection to `203.0.113.5:443`. The Windows TCP/IP stack invokes the ALE shim, which classifies the connection request at `FWPM_LAYER_ALE_AUTH_CONNECT_V4`. The filter engine walks the sublayers at that layer from highest priority to lowest. Within each sublayer, filters fire highest-weight-first, and the first matching Permit or Block ends evaluation in that sublayer. If a vendor EDR has placed a Veto-style deep-inspection callout in its own sublayer, the callout runs and can deny the connection regardless of what any other sublayer would have done. If no filter explicitly permits the AppContainer with the matching capability SID (`internetClient`, `internetClientServer`, or `privateNetworkClientServer`), the "Block Outbound Default Rule" filter in the firewall's default sublayer fires last and the connection is denied [@forshaw-2021].

<RunnableCode lang="js" title="Cross-sublayer arbitration (faithful pseudocode for one layer's sublayer chain)">{`
// Faithful translation of the Microsoft Learn "Filter Arbitration" algorithm
// for the cross-sublayer composition pass. The within-sublayer pass (sketched
// in the companion block just after this one) returns one verdict per
// sublayer using a first-match-wins rule on
// weight-ordered filters. This function composes those per-sublayer verdicts
// into the layer-level action using FWPS_RIGHT_ACTION_WRITE semantics.
// Source: https://learn.microsoft.com/en-us/windows/win32/fwp/filter-arbitration

const SOFT_PERMIT = { action: 'Permit', override: true  };
const HARD_PERMIT = { action: 'Permit', override: false };
const SOFT_BLOCK  = { action: 'Block',  override: true  };
const HARD_BLOCK  = { action: 'Block',  override: false };

// Each element is the winning verdict from one sublayer, ordered by sublayer
// priority from highest to lowest.
const sublayerVerdicts = [
  // Vendor EDR deep-inspection callout, hard block on a known-bad destination
  { sublayer: 'EDR-veto',     priority: 100n, match: (pkt) => pkt.dst === '203.0.113.5',
                              verdict: () => HARD_BLOCK },
  // Windows Defender Firewall app rule, allow-with-override
  { sublayer: 'WDF-allow',    priority:  50n, match: () => true,
                              verdict: () => SOFT_PERMIT },
  // Block Outbound Default Rule (BFE default sublayer)
  { sublayer: 'block-default', priority: 10n, match: () => true,
                              verdict: () => HARD_BLOCK },
];

function composeAcrossSublayers(packet, sublayers) {
  // Higher priority composes first
  const ordered = [...sublayers].sort((a, b) => Number(b.priority - a.priority));
  let tentative = null;
  for (const s of ordered) {
    if (!s.match(packet)) continue;
    const v = s.verdict();
    if (!v.override) {
      // Hard action: final, composition stops
      return { decision: v.action, by: s.sublayer };
    }
    // Soft action: remember, but keep composing in case a lower-priority
    // sublayer issues a hard verdict or a Block (Block overrides Permit).
    if (tentative === null || v.action === 'Block') {
      tentative = { decision: v.action, by: s.sublayer };
    }
  }
  return tentative ?? { decision: 'Permit', by: 'no-match-default' };
}

console.log(composeAcrossSublayers({ dst: '203.0.113.5' }, sublayerVerdicts));
// -> { decision: 'Block', by: 'EDR-veto' }  (hard block at priority 100)

console.log(composeAcrossSublayers({ dst: '198.51.100.7' }, sublayerVerdicts));
// -> { decision: 'Block', by: 'block-default' } (soft permit overridden by hard block)
`}</RunnableCode>
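
The within-sublayer pass that the sketch above takes as given is shorter still. Per the Filter Arbitration page, filters inside one sublayer fire from highest weight to lowest, and the first matching filter that returns Permit or Block ends that sublayer's evaluation [@wfp-arbitration]. A minimal model of that first pass, with invented filter shapes:

<RunnableCode lang="js" title="Within-sublayer pass: weight-ordered, first matching Permit/Block wins (faithful pseudocode)">{`
// The first pass of filter arbitration: inside a single sublayer, walk the
// filters from highest weight to lowest and stop at the first matching
// filter with a terminal verdict. Lower-weight filters never run.
// Source: https://learn.microsoft.com/en-us/windows/win32/fwp/filter-arbitration

function evaluateSublayer(packet, filters) {
  const ordered = [...filters].sort((a, b) => (b.weight > a.weight ? 1 : -1));
  for (const f of ordered) {
    if (!f.match(packet)) continue;
    if (f.action === 'Permit' || f.action === 'Block') {
      return { action: f.action, winner: f.name }; // terminal: stop here
    }
    // Non-terminal actions (e.g. Continue from a callout) fall through.
  }
  return { action: 'Continue', winner: null }; // sublayer has no verdict
}

const filters = [
  { name: 'allow-chrome-443', weight: 2n ** 61n, action: 'Permit',
    match: (p) => p.app === 'chrome.exe' && p.dport === 443 },
  { name: 'block-all-443', weight: 2n ** 60n, action: 'Block',
    match: (p) => p.dport === 443 },
];

console.log(evaluateSublayer({ app: 'chrome.exe', dport: 443 }, filters));
// -> { action: 'Permit', winner: 'allow-chrome-443' } (higher weight ran first)
console.log(evaluateSublayer({ app: 'curl.exe', dport: 443 }, filters));
// -> { action: 'Block', winner: 'block-all-443' }
`}</RunnableCode>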

> **Key idea:** Two competing Windows security products coexist on the same host because each one owns its own sublayer, with its own weight neighbourhood. Within a sublayer the BFE picks one winner using "first matching Permit or Block stops evaluation." Across sublayers the BFE composes those winners using "Block overrides Permit, hard actions are final, soft actions can be overridden." Pre-Vista, Windows had filters. Post-Vista, Windows has arbitration.

The engine arbitrates filters deterministically and separates condition-match (the filter) from action (the callout). What does the modern surface look like, in 2026, with two decades of features bolted on top?

## 6. The Modern WFP Surface

It is 2026. WFP is twenty years old, has never been replaced, and sits beneath more Windows components than any other networking primitive. Here is what it looks like today.

### The filter engine and its kernel client

The filter engine is the same architectural piece WFP v1 shipped with: a cross-mode classifier whose kernel-mode classification path runs primarily inside `NETIO.SYS` and whose user-mode side runs inside the Base Filtering Engine service host process [@wfp-arch][@forshaw-2021]. Callouts and filter consumers do not link against `NETIO.SYS`. They link against a different binary.

<Definition term="fwpkclnt.sys (kernel-mode WFP client / export driver)">
The kernel-mode WFP client and export driver. Callout drivers and other kernel components link against `fwpkclnt.lib`, whose in-memory module is `fwpkclnt.sys` [@wfp-arch]. The driver is the API surface that callouts use to register, classify, and call back into the engine. The classification path itself, where filters are matched and actions chosen, runs primarily in `NETIO.SYS`. The shorthand "fwpkclnt.sys *is* the filter engine" is common in blog posts and incorrect; the two binaries do different jobs.
</Definition>

The BFE-vs-MpsSvc split is the second confusion to clear. `bfe` is the Base Filtering Engine, the platform service [@wfp-about]. `MpsSvc` is the Windows Defender Firewall service, one consumer of the platform. The dependency goes one way: `MpsSvc` depends on `bfe`; `bfe` does not depend on `MpsSvc`.<Sidenote>You can verify the dependency direction on any running Windows box. `Get-Service bfe`, `Get-Service mpssvc`, then `Get-Service mpssvc | Select-Object -ExpandProperty ServicesDependedOn` will list `BFE` (among others); the reverse query on `bfe` lists no dependency on `mpssvc`. Forshaw's 2021 post documents the same arrow from the policy side: "MPSSVC converts its ruleset to the lower-level WFP firewall filters and sends them over RPC to the Base Filtering Engine (BFE) service" [@forshaw-2021].</Sidenote>

### Roughly sixty filtering layers

Microsoft's "Management Filtering Layer Identifiers" reference enumerates about sixty `FWPM_LAYER_*` GUIDs, organised by shim, direction (inbound, outbound, forward), stage (pre-IPsec, post-IPsec, discard), and IP version (v4 / v6) [@wfp-layers]. The reference page is dense, but reading it once teaches the structure. A small sample of representative layers:

- `FWPM_LAYER_INBOUND_IPPACKET_V4` and `_V6`. "Located in the receive path just after the IP header of a received packet has been parsed but before any IP header processing takes place. No IPsec decryption or reassembly has occurred" [@wfp-layers]. The earliest visibility a callout has into a received packet.
- `FWPM_LAYER_OUTBOUND_IPPACKET_V4` and `_V6`. The send-path twin.
- `FWPM_LAYER_IPFORWARD_V4` and `_V6`. The routing-decision point on a forwarding host [@wfp-layers].
- `FWPM_LAYER_INBOUND_TRANSPORT_V4` and `_V6`. After the TCP/UDP/ICMP header has been parsed but before payload delivery [@wfp-layers].
- `FWPM_LAYER_STREAM_V4` and `_V6`. The TCP stream layer where reassembled byte streams are visible [@wfp-layers].
- `FWPM_LAYER_DATAGRAM_DATA_V4` and `_V6`. Connectionless data delivery (UDP / ICMP) [@wfp-layers].
- `FWPM_LAYER_INBOUND_MAC_FRAME_ETHERNET`. Added in Windows 8; the L2 hook the "What's New" page introduced [@wfp-whatsnew].

Each non-DISCARD layer has a DISCARD twin that fires when the engine has decided to drop a packet at that point. Callouts that need to log drops register at the DISCARD layer; callouts that need to inspect or modify register at the non-DISCARD twin [@wfp-layers].
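
The naming convention is regular enough to compute with. The sketch below is illustrative only -- layer identity in the real API is a GUID, and not every layer family carries every axis -- but it captures how direction, shim, IP version, and the DISCARD stage compose into the identifiers the reference page enumerates [@wfp-layers]:

<RunnableCode lang="js" title="Composing FWPM_LAYER_* names from direction, shim, IP version, and DISCARD stage (illustrative)">{`
// Illustrative name builder for the packet/transport layer families. The
// real identifiers are GUID constants; this sketch only shows how the
// taxonomy axes compose. Not every FWPM_LAYER_* follows this exact shape.

function layerName({ direction, shim, v6 = false, discard = false }) {
  return 'FWPM_LAYER_' + direction + '_' + shim +
         (v6 ? '_V6' : '_V4') + (discard ? '_DISCARD' : '');
}

// Inspect received transport headers:
console.log(layerName({ direction: 'INBOUND', shim: 'TRANSPORT' }));
// -> FWPM_LAYER_INBOUND_TRANSPORT_V4

// Log IPv6 sends the engine has decided to drop, at the DISCARD twin:
console.log(layerName({ direction: 'OUTBOUND', shim: 'IPPACKET',
                        v6: true, discard: true }));
// -> FWPM_LAYER_OUTBOUND_IPPACKET_V6_DISCARD
`}</RunnableCode>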

### ALE classification

The ALE shim sits across seven `FWPM_LAYER_ALE_*` filtering layers plus the two redirection layers introduced in the Windows 10 era [@wfp-ale-layers]:

- `RESOURCE_ASSIGNMENT` -- local endpoint assignment (`bind`).
- `AUTH_LISTEN` -- TCP `listen`.
- `AUTH_RECV_ACCEPT` -- inbound TCP accept; inbound UDP/ICMP first datagram.
- `AUTH_CONNECT` -- outbound TCP `connect`; outbound UDP/ICMP first datagram.
- `FLOW_ESTABLISHED` -- the stateful "connection now exists" event.
- `RESOURCE_RELEASE`, `ENDPOINT_CLOSURE` -- teardown.
- `CONNECT_REDIRECT`, `BIND_REDIRECT` -- the Windows 10 redirection hooks.

Stateful per-flow context lives in the ALE shim. Application identity at each ALE layer is a normalized file name; user identity is a security descriptor [@wfp-ale]. That pair is what turns "block port 443 outbound" into "block port 443 outbound from `chrome.exe` running as user `S-1-5-21-...`."
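
A toy match predicate shows what that identity pair buys. The object shapes below are invented for the sketch; in the real API the conditions are `FWPM_CONDITION_ALE_APP_ID` (the normalized file name) and `FWPM_CONDITION_ALE_USER_ID` (the security descriptor) [@wfp-ale]:

<RunnableCode lang="js" title="What ALE identity buys: per-app, per-user rules instead of 5-tuple rules (toy model)">{`
// Toy model of an ALE-style condition match. Shapes are this sketch's own;
// the real conditions are FWPM_CONDITION_ALE_APP_ID (a normalized device
// path) and FWPM_CONDITION_ALE_USER_ID (a security descriptor). The device
// path below is shortened for readability.

const rule = {
  appId:   '\\\\device\\\\harddiskvolume3\\\\apps\\\\chrome.exe',
  userSid: 'S-1-5-21-1111111111-222222222-3333333333-1001',
  dport:   443,
  action:  'Permit',
};

function aleMatch(ctx, r) {
  return ctx.appId === r.appId && ctx.userSid === r.userSid &&
         ctx.dport === r.dport;
}

const ctx = { appId: rule.appId, userSid: rule.userSid, dport: 443 };
console.log(aleMatch(ctx, rule) ? rule.action : 'Continue'); // 'Permit'

// Same destination and port, different process image: the per-app rule
// does not fire, and evaluation continues to lower-weight filters.
const other = { ...ctx,
  appId: '\\\\device\\\\harddiskvolume3\\\\apps\\\\curl.exe' };
console.log(aleMatch(other, rule) ? rule.action : 'Continue'); // 'Continue'
`}</RunnableCode>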

### In-box callouts and downstream features

The "Built-in Callout Identifiers" reference page enumerates the GUIDs of every in-box callout: the `FWPM_CALLOUT_IPSEC_*` family (transport, tunnel, forward-tunnel, inbound-initiate-secure, ALE-connect); `FWPM_CALLOUT_WFP_TRANSPORT_LAYER_V4_SILENT_DROP` and `_V6_SILENT_DROP`; the `FWPM_CALLOUT_TCP_CHIMNEY_*` callouts [@wfp-builtin-callouts]. Microsoft describes the four canonical roles a callout plays: "Deep Inspection... Packet Modification... Stream Modification... Data Logging" [@wfp-callouts].

<Definition term="Callout Driver">
A kernel driver that registers one or more callout functions with the filter engine. The engine invokes a callout's `classifyFn` when a filter at a layer specifies the callout's GUID as its action [@wfp-filter-engine]. Callouts implement one of four roles: deep inspection (read-only payload examination), packet modification, stream modification, or data logging [@wfp-callouts]. Every third-party network-security product on Windows that runs in the kernel ships a callout driver.
</Definition>

The downstream features are not peers of WFP. They are configurations of it.

- **Windows Defender Firewall with Advanced Security (WFAS).** Microsoft Learn names this relationship verbatim: "The firewall application that is built into Windows Vista, Windows Server 2008, and later operating systems Windows Firewall with Advanced Security (WFAS) is implemented using WFP" [@wfp-start]. The `MpsSvc` service translates the WFAS rule database into WFP filters that live in the `MPSSVC_WSH` provider's sublayer [@forshaw-2021].
- **Windows IPsec.** The Base Filtering Engine "plumbs configuration settings to other modules in the system. For example, IPsec negotiation policies go to IKE/AuthIP keying modules, filters go to the filter engine" [@wfp-about]. IPsec is not a separate stack; it is a configuration of WFP plus the IKE/AuthIP keying modules.
- **WinNAT and Windows container networking.** The PowerShell cmdlet `New-NetNat` "creates a Network Address Translation (NAT) object that translates an internal network address to an external network address" [@netnat]; WinNAT, the implementation behind it, registers WFP filters to perform the translation. Windows containers use WinNAT for their default NAT switch.
- **Hyper-V Extensible Switch.** Since Windows 8 / Server 2012, "the Hyper-V extensible switch is supported starting with NDIS 6.30 in Windows Server 2012," and the switch supports extensible-switch extensions that "bind within the extensible switch driver stack" [@hyperv-extswitch]. WFP filters and callouts can be placed at vSwitch ingress and egress [@wfp-whatsnew].
- **Microsoft Defender for Endpoint Network Protection.** The Microsoft Learn page documents the capability: "Network Protection will block connections on all ports (not just 80 and 443)" [@mde-netprot]. The product enforces SmartScreen domain reputation across the entire process tree, not just the browser. The exact WFP-layer registration map is not publicly documented; Section 9 returns to it.<Sidenote>"The exact WFP-layer registration map for Microsoft Defender for Endpoint Network Protection is not publicly documented." This is one of the rare honest-disclosure moments in the WFP story. Microsoft has published the capability [@mde-netprot] but has not published the exact set of `FWPM_LAYER_*` identifiers Network Protection registers callouts at. Community reverse engineering knows fragments of the map. Section 9 treats this as an open engineering problem.</Sidenote>
- **Third-party EDR network filters.** CrowdStrike Falcon, SentinelOne, Cisco Secure Endpoint, ESET, Sophos, and the rest of the EDR vendor list ship WFP callout drivers as the standard kernel-side primitive for network telemetry and policy enforcement. There is no single Microsoft document that lists them. Forshaw's 2021 Project Zero post is the closest a primary source comes to acknowledging that this is how the industry has settled [@forshaw-2021].

<Aside label="Where Windows Internals, Part 2 covers this">
The textbook reference for WFP architecture is *Windows Internals, Part 2*, 7th edition, by Russinovich, Allievi, Ionescu, and Solomon (Microsoft Press, 2021) [@windows-internals-7th]. The book's Networking chapter walks through TCP/IP driver internals and WFP architecture together, including the filter-engine / BFE / shim taxonomy this article has used. Treat the book as the slow-read complement to the Microsoft Learn references; the chapter does not duplicate the Learn pages, it explains why the architecture chose the shape it did. Page numbers vary by printing; cite by chapter heading.
</Aside>

Five downstream features on one engine. So what are the alternatives, if you want to ship a kernel-mode network filter on Windows today and do not want to use WFP?

## 7. Competing Approaches -- LWF, eBPF, Extensible Switch, and the Azure VFP

WFP is the L3+ answer. What else is there to attach to?

**NDIS Lightweight Filter (LWF).** The L2 sibling. NDIS 6.0, shipped with Vista, introduced "NDIS filter drivers. Filter drivers can monitor and modify the interaction between protocol drivers and miniport drivers. Filter drivers are easier to implement and have less processing overhead than NDIS intermediate drivers" [@ndis-filter]. LWF is the modern replacement for NDIS 5.x intermediate drivers. It sits below the protocol stack, sees raw Ethernet frames, has no application identity, and is the right choice for raw L2 work: VLAN tagging, EAPoL, packet capture (Npcap, Microsoft Network Monitor). Choose LWF over WFP when you need pre-IP visibility and no per-process identity.

<Definition term="NDIS Lightweight Filter (LWF)">
A kernel filter driver registered with NDIS that monitors or modifies the path between a protocol driver and a miniport driver. LWF replaced NDIS 5.x intermediate drivers starting with NDIS 6.0 [@ndis-filter]. LWF drivers see Ethernet frames before any IP processing has happened. They cannot see application identity, since the OS does not yet know which process the frame belongs to.
</Definition>

**Hyper-V Extensible Switch extensions.** A specialised NDIS LWF profile. NDIS 6.30, Windows Server 2012. "The Hyper-V extensible switch supports an interface that allows instances of NDIS filter drivers (known as extensible switch extensions) to bind within the extensible switch driver stack... The Hyper-V extensible switch is supported starting with NDIS 6.30 in Windows Server 2012" [@hyperv-extswitch]. Extensions come in three roles -- capture, filter, and forwarding -- with one forwarding-extension slot per vSwitch. Choose extensible switch extensions for Hyper-V Network Virtualization, software-defined-networking overlays, or SR-IOV gating.

**eBPF for Windows.** A Microsoft-sponsored project to bring the Linux eBPF programming model to Windows. The GitHub README describes its scope as letting existing eBPF toolchains and APIs familiar from Linux be used on top of Windows, and frames the project as a work-in-progress [@ebpf-readme]. Three deployment modes: native ("PREVAIL verifier... `bpf2c` tool converts every instruction in the bytecode to equivalent C statements... built into a windows driver module (stored in a .sys file)... This is the preferred way of deploying eBPF programs" [@ebpf-readme]); JIT (user-mode service, "with HVCI enabled, eBPF programs cannot be JIT compiled, but can be run in the native mode" [@ebpf-readme]); and interpreter (debug only). The hooks the project exposes (XDP, BIND, SOCK_ADDR, SOCK_OPS, CGROUP_SOCK_ADDR) are the Linux-flavoured analogues of the WFP shim points. The v1.1.0 release, published in March 2026 and labelled "first stable" while still tagged Pre-release, "added hard/soft permit verdicts" to its accept and bind hooks -- explicitly mirroring the WFP `FWPS_RIGHT_ACTION_WRITE` model [@ebpf-releases]. The project's own GitHub Pages site repeats the work-in-progress framing [@ebpf-pages]. Choose eBPF for Windows for pre-stack DDoS scrubbing or cross-platform observability prototypes; the production-readiness caveat applies.

<Definition term="eBPF for Windows">
A Microsoft-sponsored open-source project that ports the Linux eBPF execution and toolchain to Windows. The native deployment mode compiles eBPF bytecode through PREVAIL verification and the `bpf2c` translator into a signed `.sys` kernel driver, which preserves HVCI compatibility [@ebpf-readme]. As of the v1.1.0 release (March 2026), the project remains tagged Pre-release on GitHub [@ebpf-releases].
</Definition>

**Azure VFP -- a name collision that requires disambiguation.** The Azure host-SDN data plane, presented by Daniel Firestone at NSDI 2017 [@firestone-nsdi17], is called the Virtual Filtering Platform. Nearly the same initialism as WFP. Different platform. VFP is the programmable virtual switch that runs on every Azure compute host; the NSDI 2017 abstract notes that "VFP has been deployed on >1M hosts running IaaS and PaaS workloads for over 4 years" [@firestone-nsdi17]. It uses match-action tables, layers (the word "layer" carries a different meaning than in WFP), Unified Flow Tables, and AccelNet FPGA offload via the Generic Flow Table. VFP ships only on Azure hosts. It is not customer-buildable on a Windows desktop, and Windows desktop and Server SKUs do not run it. The platforms are unrelated despite the name overlap.

> **Note:** The Azure Virtual Filtering Platform (VFP), introduced in Firestone's NSDI 2017 paper, is the Azure host SDN data plane and shares only an acronym shape with the Windows Filtering Platform [@firestone-nsdi17]. VFP runs on Azure hosts under the Hyper-V Extensible Switch and is the layer that powers SLB, NSGs, AccelNet, and Azure Virtual Network. It is unrelated to the WFP filter engine, BFE, or `fwpkclnt.sys`. If a document mentions both names, check which platform it actually describes before citing it; the acronym overlap is a reliable source of terminological drift.

| Approach | Layer / scope | App identity | Best for |
|---|---|---|---|
| WFP callout driver | L3+ across approximately sixty `FWPM_LAYER_*` IDs [@wfp-layers] | Yes via ALE [@wfp-ale] | App-aware on-host filtering and EDR telemetry |
| NDIS LWF | L2, below the protocol stack [@ndis-filter] | No | Raw L2: capture, VLAN, EAPoL |
| Hyper-V Extensible Switch ext | Inside the vSwitch, NDIS 6.30+ [@hyperv-extswitch] | Per-VM, not per-process | Hyper-V network virtualization, SDN overlays |
| eBPF for Windows | XDP / BIND / SOCK_ADDR hooks [@ebpf-readme] | Partial | Pre-stack DDoS, cross-platform observability prototypes (Pre-release) |
| Azure VFP | Azure host SDN; not customer-buildable [@firestone-nsdi17] | N/A | Azure-host SDN policy (Microsoft-internal) |

None of these displaces WFP for the dominant on-host case (application-identity-aware, IPsec-integrated, stateful, multi-vendor-arbitrated). And all of them share one limit -- a limit that is built into the laws of network physics, not into Microsoft's roadmap.

## 8. Three Ceilings -- Encryption, Offload, Kernel EoP

Three ceilings sit above WFP and every alternative listed above. None is a Microsoft bug. All are structural.

### The encryption ceiling

A WFP callout at the stream layer sees plaintext only if the payload was never encrypted, or if it was encrypted by a key the kernel owns (IPsec).<MarginNote>IPsec is the one case where the kernel does hold the keys, because the IKE/AuthIP keying modules that BFE plumbs to are themselves Windows components [@wfp-about]. Every other in-process TLS or QUIC stack keeps its keys away from the kernel.</MarginNote> TLS 1.3 and QUIC are end-to-end encrypted from the callout's point of view; the keys are inside the application's user-mode TLS library. A callout that registers at `FWPM_LAYER_STREAM_V4` and reads bytes off a Chrome HTTPS connection sees ciphertext.

The case is even sharper for QUIC. QUIC runs over UDP. From the first packet, almost all of the QUIC control plane is encrypted with a key derived from the connection's initial secret. A datagram-layer callout that wants to inspect the QUIC handshake -- not the payload, just the handshake -- cannot. Microsoft's own product team has acknowledged the limit in plain English on the Defender for Endpoint Network Protection page:

<PullQuote>
"Blocking FQDNs in non-Microsoft browsers requires that QUIC and Encrypted Client Hello be disabled in those browsers." -- Microsoft Defender for Endpoint, *Network Protection* [@mde-netprot]
</PullQuote>

That sentence is the encryption ceiling in Microsoft's own words. The product can block by 5-tuple (IP, port, protocol). It cannot block by hostname inside an Edge tab over QUIC unless QUIC is disabled in that browser. The limit is information-theoretic: a kernel filter without the session keys cannot read the encrypted payload. No engineering changes in WFP can lift it. The fix lives in the browser or in a user-mode TLS-inspecting proxy.

### The offload ceiling

The second ceiling came from hardware. Modern NICs do work that the kernel used to do, because doing it in hardware is faster. UDP Receive Segment Coalescing Offload, the marquee feature of NDIS 6.89 in Windows 11 24H2, is the cleanest example: "URO enables network interface cards (NICs) to coalesce UDP receive segments. NICs can combine UDP datagrams from the same flow that match a set of rules into a logically contiguous buffer. These combined datagrams are then indicated to the Windows networking stack as a single large packet" [@uro].

The "logically contiguous buffer" is the problem. A WFP callout written against the pre-URO semantics ("one indication at `FWPM_LAYER_DATAGRAM_DATA_V4` is one UDP datagram") is silently wrong on a system where the NIC has coalesced several datagrams into one Network Buffer List. The callout that needs per-datagram inspection has to read `NDIS_UDP_RSC_OFFLOAD_NET_BUFFER_LIST_INFO` to learn the per-flow size and unfold the indication accordingly [@uro]. The mechanical bound is that work the NIC has aggregated has lost its per-packet boundary by the time the kernel sees it.

> **Note:** A callout at `FWPM_LAYER_DATAGRAM_DATA_V4` or `_V6` that assumes "one NBL = one datagram" is silently wrong on Windows 11 24H2 systems with URO-capable NICs. Read the per-flow size from `NDIS_UDP_RSC_OFFLOAD_NET_BUFFER_LIST_INFO` and iterate. The change is documented in the URO reference page [@uro], but legacy callouts written before NDIS 6.89 will need an explicit audit.
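
As a model, the audit looks like this: unfold each indication using the per-flow segment size before inspecting. In a real callout this is kernel C reading `NDIS_UDP_RSC_OFFLOAD_NET_BUFFER_LIST_INFO` off the net buffer list [@uro]; the object shapes below are invented for illustration.

<RunnableCode lang="js" title="Unfolding a URO-coalesced indication back into datagrams (illustrative model)">{`
// Illustrative model of the URO audit. The NIC has coalesced several UDP
// datagrams from one flow into a single logically contiguous buffer; the
// callout restores per-datagram boundaries by walking the buffer in
// segment-size strides. Shapes here are this sketch's own, not NDIS.

function datagramsFromIndication(indication) {
  const { buffer, uroSegmentSize } = indication;
  if (!uroSegmentSize) {
    return [buffer]; // pre-URO semantics: one indication = one datagram
  }
  const out = [];
  for (let off = 0; off < buffer.length; off += uroSegmentSize) {
    out.push(buffer.slice(off, off + uroSegmentSize));
  }
  return out;
}

// Three 8-byte datagrams from one flow, coalesced into one indication:
const coalesced = { buffer: 'AAAAAAAABBBBBBBBCCCCCCCC', uroSegmentSize: 8 };
console.log(datagramsFromIndication(coalesced));
// -> [ 'AAAAAAAA', 'BBBBBBBB', 'CCCCCCCC' ]

// The legacy "one indication = one datagram" assumption would have treated
// all 24 bytes as a single datagram, misparsing the second and third.
`}</RunnableCode>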

The same shape repeats for TCP segmentation offload (TSO, LSO), receive offload (LRO, GRO), and TLS / IPsec / RDMA / VxLAN / GENEVE offload. Each one moves work to hardware. Each one weakens the kernel-filter assumption that "every packet flows past every layer."

### The kernel attack surface

The third ceiling is the one that drives the CVE cadence. Every callout is a kernel module [@wfp-callouts]. Every byte that crosses the `Fwpm*` user-to-kernel boundary is a potential primitive for an elevation-of-privilege exploit [@nvd-2023-29368][@nvd-2024-38034]. CVE-2023-29368, published June 14, 2023, is a CWE-415 double-free in the WFP code path with a CVSS base of 7.0 (AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H), an exploitability sub-score of 1.0, and an impact sub-score of 5.9 [@nvd-2023-29368]. CVE-2024-38034, published July 9, 2024, is a CWE-190 integer overflow in the same family of code paths with a CVSS base of 7.8 (AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H), an exploitability sub-score of 1.8, and an impact sub-score of 5.9 [@nvd-2024-38034].

The CVSS vector difference is worth reading carefully.<Sidenote>The 2024 vulnerability's attack-complexity dropped from `AC:H` to `AC:L`. The exploitability sub-score rose from 1.0 to 1.8 over the same window. The 2024 bug is easier to weaponise [@nvd-2023-29368][@nvd-2024-38034]. Without speculating about the trend across a longer time series, the direction of travel between these two anchor CVEs is "down, not up."</Sidenote>
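The diff is small enough to script. The sketch below parses the two CVSS 3.x vector strings quoted above and reports every metric that changed; the vectors come from the NVD entries [@nvd-2023-29368][@nvd-2024-38034], the parsing is illustrative.

<RunnableCode lang="js" title="Diffing the two BFE CVE vectors metric by metric">{`
// Diff two CVSS 3.x vector strings metric by metric.
const v2023 = 'AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H'; // CVE-2023-29368
const v2024 = 'AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H'; // CVE-2024-38034

const parse = (v) => Object.fromEntries(v.split('/').map((m) => m.split(':')));

function diffVectors(a, b) {
  const pa = parse(a), pb = parse(b);
  return Object.keys(pa)
    .filter((k) => pa[k] !== pb[k])
    .map((k) => k + ': ' + pa[k] + ' -> ' + pb[k]);
}

console.log(diffVectors(v2023, v2024));
// [ 'AC: H -> L' ]  -- attack complexity is the only metric that moved
`}</RunnableCode>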

There is a structural variant of the same story that does not require any memory-safety bug at all. In August 2021, James Forshaw published a Project Zero post titled "Understanding Network Access in Windows AppContainers." The post documents a default-WFP-policy configuration that allows certain low-privilege AppContainer processes to reach the network without any of the capability SIDs (`internetClient`, `internetClientServer`, `privateNetworkClientServer`) that the AppContainer documentation suggests are required [@forshaw-2021]. The associated Project Zero issue, 2207, was marked WontFix by Microsoft; the press coverage at SecurityAffairs reproduces the advisory body verbatim: "The default rules for the WFP connect layers permit certain executables to connect TCP sockets in AppContainers without capabilities leading to elevation of privilege... Eventually an AC process will match the 'Block Outbound Default Rule' rule if nothing else has which will block any connection attempt" [@securityaffairs-2021]. The bug is a policy composition bug, not a code bug. It exists in the way the in-box sublayers, filter weights, and default rules interact -- which is precisely the surface this article spent Section 5 explaining.

> **Key idea:** WFP's hardest limits are not engineering choices Microsoft can rewrite. They are information-theoretic (a kernel filter without session keys cannot read what is encrypted), mechanical (hardware offloads exist to amortise work the kernel filter would have done, and aggregation destroys per-packet ground truth), and structural (every callout is a kernel module, and every `Fwpm*` call crosses a user-to-kernel ABI). The BFE elevation-of-privilege CVE class is the running cost of a platform sophisticated enough to host every downstream feature Windows ships.

Three ceilings. Is there a structural fix for any of them, or is this what the platform looks like forever?

## 9. Open Problems -- Where the Engineering Lives

Six questions are live right now. None of them has a clean answer.

**QUIC inspection in the kernel.** The current best partial result is to block QUIC by 5-tuple and rely on a browser's HTTP/3 fallback to TLS over TCP, where in-box inspection still works. The Defender for Endpoint Network Protection page documents the workaround verbatim: "Blocking FQDNs in non-Microsoft browsers requires that QUIC and Encrypted Client Hello be disabled in those browsers" [@mde-netprot]. Anything deeper than 5-tuple inspection on QUIC requires a user-mode proxy that terminates the QUIC connection and re-originates it, which moves the problem out of WFP.

**Microsoft Defender for Endpoint's exact WFP-layer registration map.** Publicly undocumented. Microsoft has published the capability and the limitations [@mde-netprot] but not the precise set of `FWPM_LAYER_*` GUIDs that Network Protection registers callouts at. Community reverse engineering knows fragments. A definitive map would let third-party EDR vendors avoid sublayer-priority conflicts with Defender. Whether Microsoft publishes one is a product-roadmap question.

**The structural shape of the BFE EoP CVE class.** Is the BFE elevation-of-privilege CVE class -- CWE-415 in 2023 [@nvd-2023-29368], CWE-190 in 2024 [@nvd-2024-38034], no public impossibility theorem either way -- tail risk inherent to the platform's policy-from-user-mode-to-kernel design, or is it addressable by an architectural fix ([HVCI](/blog/wdac--hvci-code-integrity-at-every-layer-in-windows/) hardening on `fwpkclnt.sys` callout paths, bounded ABI contracts on the `Fwpm*` surface, Rust-in-Windows-kernel for new callout drivers)? The honest answer is that this is open. The integer-overflow / use-after-free class is the canonical attack surface of any user-to-kernel ABI; the question is whether Microsoft commits to a structural fix or to tail-risk-mitigation-plus-patching.

**eBPF for Windows production readiness.** Does it displace WFP for new kernel-mode network filters, or does it stay adjacent? The v1.1.0 release in March 2026 was framed as "first stable" while still labelled Pre-release [@ebpf-releases]. The same release added hard/soft permit verdicts to its accept and bind hooks, explicitly mirroring `FWPS_RIGHT_ACTION_WRITE` in WFP [@ebpf-releases]. That borrowing is a tell -- the project is converging on the WFP arbitration semantics, which suggests the long-term picture is "eBPF for Windows alongside WFP" rather than "eBPF replaces WFP." The market answer is unsettled.

**Windows Defender Application Guard's egress-isolation pattern after WDAG deprecation.** WDAG for Edge used a WFP-backed egress-isolation pattern to route browsing-container traffic out of an isolated network compartment. The WDAG product surface is being phased out -- Microsoft has documented that "Microsoft Defender Application Guard... is deprecated for Microsoft Edge for Business and will no longer be updated. Starting with Windows 11, version 24H2, Microsoft Defender Application Guard... is no longer available" [@mdag-deprecation]. The pattern's future on Windows -- in containers, virtualization-based security profiles, or some successor -- is undocumented as of the time of writing. Treat this paragraph as conjectural until Microsoft publishes a successor pattern.

**NIC offload composability with kernel firewalls.** As more pipeline elements move into the NIC -- TSO, LSO, GRO/GSO, URO [@uro], TLS offload, IPsec offload, RDMA, VxLAN, GENEVE -- the assumption that every packet flows past every WFP layer weakens. A callout that registers at `FWPM_LAYER_INBOUND_TRANSPORT_V4` may never see a packet whose transport-layer work happened entirely on the NIC. The kernel-firewall design that grew up assuming software ground truth has to renegotiate that assumption release by release. NDIS 6.89's URO is the most recent example [@ndis-689]; there will be more.

<Aside label="What 'open' means here">
"Open" in this section means engineering-open, not theory-open. There is no published impossibility theorem stating that WFP cannot be made provably safe against integer-overflow elevation-of-privilege, or that a kernel firewall cannot inspect encrypted traffic with a key-disclosure protocol, or that NIC offloads cannot be composed with kernel-side filters by sharing flow state. The practical question, in every case, is whether Microsoft and the broader Windows community invest in the structural fix or settle for tail-risk-mitigation plus patching. The answer in 2026 is "mostly the latter."
</Aside>

Six open problems. Now, how do you actually use the platform that has been the subject of this article?

## 10. The Four Ways You Touch WFP

Whether you are an administrator, a detection engineer, or a kernel driver writer, there are four canonical surfaces you actually touch. Here is the field guide.

### The diagnostic surface: `netsh wfp`

Wikipedia's WFP page dates the diagnostic surface to Windows 7: "Starting with Windows 7, the netsh command can diagnose" the internal state of WFP [@wiki-wfp]. The canonical incident-response sequence is three commands long.

> **Note:** Run these three commands, in this order, before doing anything else when a Windows host shows network-filtering behaviour you cannot explain:
>
> ```text
> netsh wfp show state    > state.xml
> netsh wfp show filters  > filters.xml
> netsh wfp capture start file=C:\Temp\wfp.cab
> :: reproduce the issue
> netsh wfp capture stop
> ```
>
> `state.xml` is the platform's current rendered configuration: every provider, sublayer, filter, and callout currently registered. `filters.xml` lists every filter, including effective weight and action. The `.cab` from `netsh wfp capture` is the ETW-and-state bundle that goes onto a Microsoft Support case. The `netsh wfp` family has been around since Windows 7 [@wiki-wfp]; it has not had a major redesign since.

A `state.xml` from `netsh wfp show state` is an XML document with one `<item>` per filter. Each item carries a `<displayData>` element with a name and description, the layer GUID, the sublayer GUID, the weight, and the action. Reading one is a matter of pattern recognition rather than parsing. The next snippet walks the structure on a hand-pasted fragment.

<RunnableCode lang="js" title="Decoding a netsh wfp show state filter element">{`
// A real-world 'netsh wfp show state' output contains many <item> elements
// inside <filters>. The fragment below is a single filter, hand-pasted from
// a 'show state' XML dump.
const xmlFragment = \`
<item>
  <filterKey>{deadbeef-1111-2222-3333-444455556666}</filterKey>
  <displayData>
    <name>EDR-vendor outbound TCP inspect</name>
    <description>Vendor X deep-inspection callout filter</description>
  </displayData>
  <layerKey>FWPM_LAYER_ALE_AUTH_CONNECT_V4</layerKey>
  <subLayerKey>{a0192d10-aaaa-bbbb-cccc-1234567890ab}</subLayerKey>
  <weight>
    <type>FWP_UINT64</type>
    <uint64>0x4000000000000064</uint64>
  </weight>
  <action>
    <type>FWP_ACTION_CALLOUT_INSPECTION</type>
  </action>
</item>
\`;

function readFilter(xml) {
  const get = (tag, scope = xml) => {
    const m = scope.match(new RegExp('<' + tag + '>([^<]+)</' + tag + '>'));
    return m ? m[1].trim() : null;
  };
  // <type> appears under both <weight> and <action>; scope the action
  // lookup to the <action> element, or the first match (FWP_UINT64) wins.
  const a = xml.indexOf('<action>');
  const actionBlock = a >= 0 ? xml.slice(a, xml.indexOf('</action>', a)) : '';
  return {
    name:     get('name'),
    layer:    get('layerKey'),
    subLayer: get('subLayerKey'),
    weight:   get('uint64'),
    action:   get('type', actionBlock),
  };
}

console.log(readFilter(xmlFragment));
// {
//   name: 'EDR-vendor outbound TCP inspect',
//   layer: 'FWPM_LAYER_ALE_AUTH_CONNECT_V4',
//   subLayer: '{a0192d10-aaaa-bbbb-cccc-1234567890ab}',
//   weight: '0x4000000000000064',
//   action: 'FWP_ACTION_CALLOUT_INSPECTION'
// }
`}</RunnableCode>

Five fields: name, layer, sublayer, weight, action. That is what every WFP filter resolves to. Reading a hundred of them takes an afternoon.
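The weight deserves one more decode. Assuming the FWP_UINT8 convention described in the study guide below -- BFE partitioning the 64-bit weight space into 16 high-order ranges -- the top four bits of the sample filter's `0x4000000000000064` name its range and the low bits its position within it. A BigInt sketch of that split:

<RunnableCode lang="js" title="Splitting a 64-bit filter weight into range and offset">{`
// Hedged sketch: assumes BFE's FWP_UINT8 weights 0..15 map onto the 16
// high-order ranges of the 64-bit weight space, i.e. the top four bits.
const weight = 0x4000000000000064n;

const range  = Number(weight >> 60n);         // which of the 16 ranges
const offset = weight & ((1n << 60n) - 1n);   // position inside the range

console.log({ range, offset: '0x' + offset.toString(16) });
// { range: 4, offset: '0x64' }
`}</RunnableCode>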

### The administrative surface: `wf.msc`

The Microsoft Management Console snap-in is the surface most Windows users have actually clicked. Every rule created in `wf.msc` is translated by the `MpsSvc` service into a WFP filter and pushed into the BFE's MPSSVC provider sublayer over RPC, and from there into `TCPIP.SYS` in the kernel [@forshaw-2021]. The UI exposes a small fraction of the filter properties WFP actually models; advanced rule attributes (per-AppContainer SID, per-package family name, per-service hardening) live in the underlying filter only.

### The networking surface: `New-NetNat` and Hyper-V NAT switches

The PowerShell cmdlet `New-NetNat` "creates a Network Address Translation (NAT) object that translates an internal network address to an external network address" [@netnat]. Each NAT object materialises as a set of WFP filters that perform the translation. Windows containers use the same machinery for their default NAT switch. The `Get-NetNat`, `Remove-NetNat`, and related cmdlets in the `NetNat` PowerShell module are the entry point.
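A minimal round trip through the module looks like this; the NAT name and internal prefix are illustrative:

```text
PS> New-NetNat -Name "DemoNat" -InternalIPInterfaceAddressPrefix "172.16.0.0/24"
PS> Get-NetNat | Format-List Name, InternalIPInterfaceAddressPrefix, Active
PS> Remove-NetNat -Name "DemoNat"
```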

### The driver surface: writing a WFP callout

The WDK's "Introduction to Windows Filtering Platform Callout Drivers" page is the entry point for kernel-mode writers [@wfp-callouts]. The reference sample, `WFPSampler`, lives in the `microsoft/Windows-driver-samples` repository under `network/trans/WFPSampler`. The sample's description: "The WFPSampler sample driver is a sample firewall. It has a command-line interface which allows adding filters at various WFP layers with a wide variety of conditions. Additionally it exposes callout functions for injection, basic action, proxying, and stream inspection" [@wfpsampler]. The sample ships as `WFPSampler.Exe`, `WFPSamplerService.Exe`, `WFPSamplerCalloutDriver.Sys`, `WFPSamplerProxyService.Exe`, and the two libraries `WFPSampler.Lib` / `WFPSamplerSys.Lib`.

<Sidenote>If you install WFPSampler and the installer refuses to register without a reboot prompt, the README documents a workaround: run `RunDLL32 setupapi.dll,InstallHinfSection DefaultInstall 131 wfpsampler.inf` (note the `131`), and `RunDLL32 setupapi.dll,InstallHinfSection DefaultInstall 132 wfpsampler.inf` for the corresponding uninstall codepath [@wfpsampler]. The 131/132 flags suppress the reboot prompt for the in-tree sample driver.</Sidenote>

A WFP callout driver that originates kernel-mode network I/O should pair with Winsock Kernel.

<Definition term="Winsock Kernel (WSK)">
"Winsock Kernel (WSK) is a kernel-mode Network Programming Interface (NPI)" [@wsk-intro]. WSK is the modern replacement for TDI as the kernel-mode sockets API on Windows Vista and later. Microsoft's WSK introduction makes the split explicit: "Filter drivers should implement the Windows Filtering Platform on Windows Vista, and TDI clients should implement WSK" [@wsk-intro]. WFP filters traffic. WSK opens sockets from inside the kernel. The two interfaces are siblings.
</Definition>

<Spoiler kind="hint" label="Should I write a kernel driver, or can I do this in user mode?">
Before writing a callout driver, ask: does the policy need per-packet kernel visibility, or would a user-mode service that consumes ETW events from `Microsoft-Windows-WFP` and the firewall's ETW providers be enough? Most logging and detection use cases are answered by ETW. A callout driver is justified when you need to *act on* traffic (drop, redirect, modify, inspect payload), not just *observe* it. The kernel attack surface that comes with a callout, documented in Section 8, is now yours to share once you ship.
</Spoiler>

The detection-engineering surface lives in [ETW](/blog/etw-how-windows-2000s-performance-hack-became-the-edr-substr/). The two providers to know are `Microsoft-Windows-WFP` and `Microsoft-Windows-Windows Firewall With Advanced Security`. Provider names alone do not do the subject justice; the cross-reference footer below points at the dedicated ETW article in this series.
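A minimal capture loop for the first provider uses stock `logman`; the session name and output path here are illustrative:

```text
logman create trace wfp-etw -p "Microsoft-Windows-WFP" -o C:\Temp\wfp.etl -ets
:: reproduce the behaviour under investigation
logman stop wfp-etw -ets
```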

You now have a mental map of every place WFP touches a Windows host -- under the firewall UI, under IPsec, under WinNAT, under the Hyper-V vSwitch, under Defender for Endpoint, under every EDR. The FAQ disarms the last eight misconceptions.

## 11. Frequently Asked Questions

<FAQ title="Frequently asked questions">

<FAQItem question="Is WFP the firewall?">
No. WFP is the platform; the Windows Firewall (WFAS, service name `MpsSvc`) is one consumer of it. Microsoft's start page makes the relationship explicit: "Windows Firewall with Advanced Security (WFAS) is implemented using WFP" [@wfp-start]. The Base Filtering Engine service (`bfe`) hosts the user-mode side of WFP and accepts policy from `MpsSvc` over RPC [@forshaw-2021]. Two user-mode services and a kernel-mode classification path, one platform.
</FAQItem>

<FAQItem question="Does fwpkclnt.sys do the filtering?">
No. `fwpkclnt.sys` is the kernel-mode WFP client and export driver. Callout drivers link against `fwpkclnt.lib`, whose in-memory form is `fwpkclnt.sys` [@wfp-arch]. The classification path -- the code that walks sublayers and filters -- runs primarily inside `NETIO.SYS`, as Forshaw documents in his Project Zero post [@forshaw-2021]. The shorthand "`fwpkclnt.sys` is the filter engine" is common online and incorrect.
</FAQItem>

<FAQItem question="Is BFE the same as MpsSvc?">
No. BFE (service name `bfe`) is the Base Filtering Engine -- the platform service that controls WFP and plumbs configuration to other modules, including IPsec keying [@wfp-about]. `MpsSvc` is the Windows Defender Firewall service. `MpsSvc` depends on `bfe`; the dependency is not reciprocal [@forshaw-2021].
</FAQItem>

<FAQItem question="Does WFP see TLS or QUIC payloads?">
No. WFP callouts see plaintext only for non-IPsec, non-TLS payloads, or for IPsec traffic where the kernel holds the keys. TLS 1.3 and QUIC are end-to-end encrypted from a callout's perspective; the keys live in user-mode TLS libraries inside the application. Microsoft's own Defender for Endpoint Network Protection documentation acknowledges the limit: "Blocking FQDNs in non-Microsoft browsers requires that QUIC and Encrypted Client Hello be disabled in those browsers" [@mde-netprot]. Section 8 calls this the encryption ceiling.
</FAQItem>

<FAQItem question="Did SecureNAT come back?">
No. SecureNAT is an ISA Server / Forefront Threat Management Gateway concept, retired with TMG. The modern Windows-host NAT on WFP is **WinNAT**, managed by the `New-NetNat` PowerShell cmdlet [@netnat]. Windows containers use WinNAT for their default NAT switch; "SecureNAT" as a WFP consumer is a misattribution.
</FAQItem>

<FAQItem question="Is WSK an acronym for 'Windows Sockets Kernel'?">
No. WSK is **Winsock Kernel**. Microsoft Learn's introduction is unambiguous: "Winsock Kernel (WSK) is a kernel-mode Network Programming Interface (NPI)" [@wsk-intro]. The two-letter prefix is "Winsock," the original Windows Sockets API brand, not "Windows Sockets."
</FAQItem>

<FAQItem question="Did the 2024 SharePoint CVE-2024-21318 break BFE?">
No. CVE-2024-21318 is a Microsoft SharePoint Server deserialization remote code execution vulnerability, unrelated to the Base Filtering Engine. The 2024 WFP elevation-of-privilege vulnerability is **CVE-2024-38034**: a CWE-190 integer overflow with a CVSS base of 7.8 [@nvd-2024-38034]. This article tracks CVE-2024-38034 and CVE-2023-29368 as the two anchor BFE CVEs.
</FAQItem>

<FAQItem question="Can a WFP callout block QUIC?">
Only at the 5-tuple level (addresses, ports, protocol), whether the connection is new or already established. Once a QUIC connection is up, the encryption ceiling applies and the kernel has no key for the encrypted payload [@mde-netprot]. FQDN-level blocking over QUIC under Network Protection requires QUIC to be disabled in the browser, per Microsoft's own documentation [@mde-netprot]. Deep inspection of QUIC content from the kernel is not possible with WFP alone.
</FAQItem>

</FAQ>

---

**See also.** The Microsoft-Windows-WFP and Microsoft-Windows-Windows Firewall ETW providers are how detection-engineering teams see WFP from outside the kernel; the dedicated ETW article in this series goes deeper on the provider names, manifests, and parsing. The [Antimalware Scan Interface (AMSI)](/blog/amsi-the-100-microsecond-window-where-defender-catches-a-bas/) sits on the process-side path that complements WFP's network-side path; the two are siblings, not substitutes. And the `\Device\Ipfilterdriver` device object that this article retired in Section 3 lives in the Windows [Object Manager namespace](/blog/the-object-manager-namespace-the-hierarchical-filesystem-und/), whose architecture is the subject of the Object Manager article in this series.

<StudyGuide slug="windows-filtering-platform-the-kernel-mode-firewall-you-dont-see" keyTerms={[
  { term: "Windows Filtering Platform (WFP)", definition: "Cross-mode kernel/user-mode filtering service introduced in Windows Vista that replaced NDIS-IM, filter-hook, TDI-filter, and Winsock LSP as the in-box network filtering surface." },
  { term: "Base Filtering Engine (BFE)", definition: "The Windows service (bfe) that controls WFP and plumbs configuration to other modules. Not the same as MpsSvc." },
  { term: "Filter Engine", definition: "The core WFP component that stores filters and performs filter arbitration. Hosted in both kernel mode and user mode; kernel classification runs primarily in NETIO.SYS." },
  { term: "Shim", definition: "Kernel-mode bridge between a network-stack module and the filter engine. Vista shipped six: ALE, Transport Layer Module, Network Layer Module, ICMP Error, Discard, Stream." },
  { term: "Application Layer Enforcement (ALE)", definition: "Set of WFP layers used for stateful filtering and the only layers where filters can match on application identity (normalized file name) and user identity (security descriptor)." },
  { term: "Sublayer", definition: "Priority-ordered subdivision of a WFP filtering layer. Vendors are expected to create their own sublayer via FwpmSubLayerAdd0." },
  { term: "Filter Weight", definition: "64-bit value ordering filter evaluation within a sublayer. May be set as an explicit FWP_UINT64, generated by BFE (FWP_EMPTY), or partitioned into one of 16 high-order ranges via FWP_UINT8." },
  { term: "Callout Driver", definition: "Kernel driver registered with the filter engine that performs deep inspection, packet modification, stream modification, or data logging when a filter selects it." },
  { term: "fwpkclnt.sys", definition: "Kernel-mode WFP client / export driver. Callouts link against fwpkclnt.lib; the in-memory module is fwpkclnt.sys. Not the filter engine." },
  { term: "Winsock Kernel (WSK)", definition: "Kernel-mode sockets NPI introduced in Vista. WFP filters traffic; WSK opens sockets from inside the kernel. Replaces TDI for kernel-mode socket clients." },
  { term: "NDIS Lightweight Filter (LWF)", definition: "L2 filter driver introduced in NDIS 6.0 to replace NDIS 5.x intermediate drivers. Sees Ethernet frames before IP processing; no application identity." }
]} questions={[
  { q: "Why did the four pre-WFP hooks (NDIS-IM, filter-hook, TDI-filter, LSP) fail collectively?", a: "Each hook had a specific architectural flaw -- one callback pointer with no documented chaining for filter-hook, no application identity for NDIS-IM, a deprecated substrate for TDI, in-process bypass for LSP. Together those flaws made multi-vendor coexistence impossible, which the Pawar/Stenson 2006 WinHEC deck pinned at 12 percent of all OS crashes." },
  { q: "What is the difference between BFE and MpsSvc?", a: "BFE is the Base Filtering Engine (the WFP platform service). MpsSvc is the Windows Defender Firewall service (one consumer of the platform). MpsSvc depends on BFE; the dependency is one-way." },
  { q: "How does WFP arbitrate two filters at the same layer with the same priority?", a: "Filters live inside sublayers. Sublayers are priority-ordered; filters within a sublayer are weight-ordered. Hard Block and Hard Permit are terminal; Soft Block and Soft Permit can be overridden by a later evaluator; Block overrides Permit when nothing else terminates evaluation." },
  { q: "Why can a WFP callout not block QUIC by hostname?", a: "QUIC encrypts almost all of its control plane from the first byte using a key derived from the connection's initial secret. The kernel has no access to that key; the keys live in the application's user-mode QUIC stack. WFP can block QUIC only at the 5-tuple level. FQDN blocking requires QUIC and Encrypted Client Hello to be disabled in the browser, per Microsoft's own Network Protection documentation." },
  { q: "What is the structural reason BFE keeps producing elevation-of-privilege CVEs?", a: "Every callout is a kernel module and every Fwpm* call crosses a user-to-kernel ABI. The integer-overflow / use-after-free attack surface is intrinsic to that boundary. CVE-2023-29368 (CWE-415, CVSS 7.0) and CVE-2024-38034 (CWE-190, CVSS 7.8) are the two anchor CVEs of the class; the 2024 vulnerability is rated easier to weaponise than the 2023 one." }
]} />
