Selective VLAN trunking for VMs with an LACP-based backend
I just installed a new server with Windows Server 2019 Datacenter instead of using the VMware platform as I’d usually do. I want to migrate everything to Hyper-V so I can “leverage”, as they love to say in Microsoft’s keynotes, the unlimited VM instances, and because I tried Hyper-V back in WS2012R2 and remember Windows being very efficient on it.
But it has not been at all as easy as advertised. The documentation basically has different terms for what the rest of the world already has names for; not even Cisco does this. Every name sounds like, or overlaps with, a standard term but is really a proprietary Microsoft thing that isn’t similar in the slightest, and to top it off it’s all mixed up from 2012R2, to 2016, to 2019, in both Windows Server and SCVMM. For instance, in this newer Windows there’s this thing called Switch Embedded Teaming. Given that among the other options there’s Switch Independent Teaming, or similar, you’d think it means the physical switch, the one with embedded logic to bond or load-balance links (e.g. LACP/LAG), but it refers to a virtual switch. I kept reading about it over and over, yet I never got how this switch communicates with the other end, i.e. the VM, for its own logic to go through. It’s extremely confusing.
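From what I eventually pieced together, the “embedded” part means the team lives inside the virtual switch itself instead of being a separate host-level object, which would also explain why LACP never comes up around it. A minimal sketch of what creating one looks like, if I’ve understood it right (adapter and switch names are made up):

```powershell
# Switch Embedded Teaming: the team is created as part of the vSwitch,
# not as a separate NIC team the vSwitch binds to. Note there is no
# -TeamingMode parameter here; SET is switch-independent only, no LACP.
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true
```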
So far I’ve deployed a single WS2019 host with SCVMM 2016 as a VM. I had to move the VM to the host myself, because when I did it through VMM it wasn’t smart enough to realize the VM was its own guest, and it kept crashing itself when iSCSI connectivity was lost. Even after that, it has not been able to consistently apply settings (which at this point I’m basically guessing at) and keeps losing connectivity.
In the documentation I see that a lot of push is made for users to adopt connectivity where the physical switch is none the wiser: specifically, LACP to aggregate links from switch to host, because all of the other methods create excessive noise on the network with the corrective measures they use to keep the physical link working.
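The host half of that is at least doable from PowerShell; this is the LACP team I’m building (team and adapter names are placeholders, and the physical switch ports are configured as a matching LACP port-channel):

```powershell
# Classic (LBFO) NIC team in LACP mode; this is the flavor that
# actually negotiates 802.1AX (formerly 802.3ad) with the physical switch.
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode LACP -LoadBalancingAlgorithm Dynamic

# Sanity check: TeamingMode should read 'Lacp' and Status 'Up'
# once the switch side of the LAG comes up.
Get-NetLbfoTeam -Name "HostTeam"
Get-NetLbfoTeamMember -Team "HostTeam"
```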
I can’t find any specific documentation on this, though. On vCenter it’s pretty easy to bond the NICs and selectively assign VLANs to VMs as needed. I don’t know how to do this in SCVMM. I don’t know how to add a port/switch with all VLANs (it would simply be VLAN 4095 on VMware), and I haven’t found how to add a port/switch with just a handful of arbitrarily picked VLANs either, also a completely straightforward task using vCenter. (vCenter is connected to SCVMM, BTW, but I want to downsize, so it’s going away.)
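The closest I’ve found to those two vCenter tasks is per-vNIC PowerShell on the host itself, rather than anything in SCVMM proper; assuming a VM named “VM1” (the name and VLAN IDs are made up):

```powershell
# Trunk the full VLAN range to the vNIC: roughly VMware's VLAN 4095.
Set-VMNetworkAdapterVlan -VMName "VM1" -Trunk `
    -AllowedVlanIdList "1-4094" -NativeVlanId 0

# Or trunk just a handful of hand-picked VLANs to the same vNIC.
Set-VMNetworkAdapterVlan -VMName "VM1" -Trunk `
    -AllowedVlanIdList "10,20,300" -NativeVlanId 0

# Plain access port on a single VLAN, for comparison.
Set-VMNetworkAdapterVlan -VMName "VM1" -Access -VlanId 100
```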
I’m currently setting up the LACP bond in NIC Teaming, outside of SCVMM, then adding an External switch in Hyper-V and coming back to NIC Teaming to add the few extra VLANs I need. Once in SCVMM I’ve tried every combination I could think of and nothing appears to work. My problem surrounds the Logical Switch’s uplinks: I can’t add several in LACP mode for one reason or another… and “reason” is just a saying, because, as we all know, Windows error messages make zero sense even when you know what they mean.
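For completeness, this is the sequence I’m effectively doing by hand, plus my best guess at the SCVMM equivalent for the uplink (the logical network definition and profile names are mine, and I’m not at all sure this is the intended order of operations):

```powershell
# Host side: bind an External vSwitch to the LACP team's default
# interface, then add team interfaces for the extra VLANs I need.
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "HostTeam" `
    -AllowManagementOS $true
Add-NetLbfoTeamNic -Team "HostTeam" -VlanID 20
Add-NetLbfoTeamNic -Team "HostTeam" -VlanID 30

# SCVMM side: an uplink port profile declaring LACP teaming,
# meant to be attached to the Logical Switch's uplinks.
$lnd = Get-SCLogicalNetworkDefinition -Name "Datacenter_LND"
New-SCNativeUplinkPortProfile -Name "LACP-Uplink" `
    -LogicalNetworkDefinition $lnd `
    -LBFOTeamMode "LACP" -LBFOLoadBalancingAlgorithm "HostDefault" `
    -EnableNetworkVirtualization $false -EnableVmqSupport $true
```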
I don’t want to “SDN” anything because, frankly, I’d be relying way too much on a self-updating, historically unreliable operating system to keep things working; and while this single host is on WS2019, the others are not, they only go up to WS2016, resulting in mismatched feature sets. Besides, everything’s already in place, working solidly, and as standards-based as possible.
It’s been so frustrating that I’ve been considering nested virtualization to keep VMware in control, but it’d negate the very goal I’m trying to achieve. :(
Is there any documentation that’s specific to this?
I bet you think this post is about you. Don't you…don't you. ♪