Resetting expired admin password on NSX-T

So apparently it has been 90 days since the deployment of NSX-T and therefore it is time for the admin password to expire ;):

password-expired

Unfortunately, this doesn't give you the opportunity to log in and then change the password (a feature I would really appreciate), so a reset is necessary. In the online documentation (https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/administration/GUID-8816B842-2EC4-40A8-A618-F68DB29FABD2.html) the reset is done by rebooting one of the appliances into single user mode and resetting the password there.

However, in the online documentation one of the steps is to “touch” a file (which means creating a blank file), through the command:

touch /config/vmware/nsx-node-api/reset_cluster_credentials

But that directory structure doesn't exist when booting into single user mode, so an error is shown. When you let the appliance boot without touching the file, you can log in with the new password, but after a couple of minutes the password of the admin user is reset to its original value. I assume that creating the "reset_cluster_credentials" file makes the change permanent.

So, what I did in my environment is change the password according to the documentation, but without touching the file in single user mode. Then reboot the appliance into "normal" operating mode and, within the time-out period (a couple of minutes), create the file with the command:

touch /config/vmware/nsx-node-api/reset_cluster_credentials

After that, the reversal of the password does not happen again and you can log in with the newly set password.
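To summarize the whole sequence as I ended up doing it (a rough sketch; the single user mode step follows the VMware documentation referenced above and may differ slightly per version):

passwd admin    # in single user mode, to set the new password
reboot          # skip the touch here, the directory does not exist yet
touch /config/vmware/nsx-node-api/reset_cluster_credentials    # after the normal boot, as root, within the time-out period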

NSX-T: Missing locale data for the locale “XX”.

So, as you may know, I am from the Netherlands (or Holland, although technically I am not, but that is a different discussion ;)). Within my lab environment I sometimes use Dutch as the language setting, for instance within Firefox, the browser I use(d) to configure NSX-T. But this issue also occurs in Chrome. Within IE (although I only used that to test if it was affected by this ;)), the page doesn't load at all: when I try to open Advanced Networking & Security it keeps showing a spinning circle.

Since I started working with NSX-T, I have run into an annoying error on some pages within NSX-T. When I (for instance) went into "Advanced Networking & Security" and selected "Load Balancing", after a couple of seconds I would get an error page with a stack trace:

error-melding-1

After a very short time (and I had to record this sequence to get the first screen), I would get the following screen:

error-melding-2

When I copied the error to the clipboard, it would give the same error as in the first screen, namely:

Missing locale data for the locale "nl".

So I switched from NL to US-EN and (as you might expect) everything works fine. Not a definitive solution, but it lets me keep playing with NSX-T without errors :).

So after looking further into this, I found out that (re)installing the language within Chrome fixed the error; within Chrome, that simply means adding the language in the settings. Within Firefox this is not a possibility. It is possible to add a language as a preferred language for displaying pages:

language-1

Click on the “Choose” button and add Dutch as a webpage language:

language-2

but that didn’t solve the problem. So for now, I will be using Chrome for NSX-T…

Load Balancing with NSX-T – Part 1

So after I looked at the installation, the fabric, and routing and switching, it is time to take a look at the higher level networking functions within NSX-T, like Load Balancing. The functionality itself has not changed very much in NSX-T (compared to NSX-V), but the way that it is consumed is different.

As described in an earlier blog (Routing with NSX-T (part 1)), NSX-T uses multiple entities within its fabric. Two of these are the tier-0 and tier-1 gateways, and load balancing can only exist on a tier-1 gateway. So when you use Load Balancing within NSX-T, you have to deploy both tier-0 and tier-1 components.

As described in the aforementioned blog, the tier-0 and tier-1 gateways are running within an Edge Node (which can be either virtual or physical). The Edge Node is assigned through the use of an Edge Node Cluster, so it basically looks something like this:

Load Balancing in relation to the Edge Node

(almost ran out of colors there ;)).

So when we look at it hierarchically, we have an Edge Node Cluster which is made up of one or more Edge Nodes. Edge Nodes can contain one or more tier-0 or tier-1 Logical Routers, which are composed of a Distributed component and a Service component. A Service Router can run a Load Balancer, which can contain one or multiple Virtual Servers.

And the last step is that each virtual server can use a server pool, which consists of one or more targets (in the form of an IP-address).

Mode

Load Balancing within NSX-T can be done in two ways:

  • One-Arm Load Balancing
  • Inline Load Balancing

The difference between the two is described in the NSX-T administration guide better than I could describe it, so I'll blatantly copy this from the NSX-T 2.4 Admin Guide (including the pictures):

One-Arm Load Balancer

One-Arm LB

“In one-arm mode, the load balancer is not in the traffic path between the client and the server. In this mode, the client and the server can be anywhere. The load balancer performs Source NAT (SNAT) to force return traffic from the server destined to the client to go through the load balancer. This topology requires virtual server SNAT to be enabled.

When the load balancer receives the client traffic to the virtual IP address, the load balancer selects a server pool member and forwards the client traffic to it. In the one-arm mode, the load balancer replaces the client IP address with the load balancer IP address so that the server response is always sent to the load balancer and the load balancer forwards the response to the client.

Inline Load Balancer

Inline LB

In the inline mode, the load balancer is in the traffic path between the client and the server. Clients and servers must not be connected to the same tier-1 logical router. This topology does not require virtual server SNAT. “

(end of excerpt)

We will deploy both modes, but in this blog, we are only doing the one-arm.

So we'll start by creating a Server Pool containing a couple of web servers. For this, I'll use two LAMP-stack based virtual machines, with Apache enabled and a very basic website, which will show me to which server I am connecting.

Building on the environment that I already have in place, I am going to deploy the virtual machines on the LS-Web segment, so they should be reachable within the network. And since this is the "one-arm" mode, I can place my targets anywhere I want, as long as the IP addresses I configure in the server pool are reachable by the load balancer.

Just for reference, the topology that we are working on:

NSX-T-Logisch

and the segment where the virtual machines are going to be deployed, which is LS-WEB. For these virtual machines I am using the IP-addresses 10.200.1.101 and 10.200.1.102.

The only difference between the two virtual machines is the background color of their web page and the name of the server. The first machine shows:

Server-1
And the second one shows:
Server-2
(yes, I do like playing around with colors ;)).

So within NSX-T, we can now create a server pool, consisting of the IP addresses used by these virtual machines:

Add Server Pool

One thing to mention here is the "Active Monitor". That is the method the load balancer uses to determine the status of the members. In this case, the "default-http-lb-monitor" is selected. It will perform a simple HTTP GET and when it gets a response, it will assume the server is available. But you can do a lot more advanced monitoring here.
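If you want to see by hand roughly what such a monitor checks, a plain HTTP GET against the pool members gives a good impression (this is just a manual sanity check from a machine that can reach the members, not what NSX-T itself runs):

curl -i http://10.200.1.101/
curl -i http://10.200.1.102/

Both should return an HTTP 200 with the colored test page shown above.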

Here you can see the default monitor:

default monitor

and the additional HTTP Request and HTTP Responses you can add to a monitor (not the default of course):

When creating the server pool, we can also add members to the pool. In order to do that, we click the “Select Members” link, which leads us to a page where we can add members:

Add Pool Members

Before we can create a Virtual Server, we first have to create a Load Balancer and connect that to a tier-1 gateway. What we’ll do is create a new tier-1 gateway and connect it to the existing tier-0 gateway named “Tier-0-GW”. This way, all the routing configuration is in place.

So we create a tier-1 gateway, connected to “Tier-0-GW”:

Create Tier-1 LB

The tier-1 gateway is going to run on the edge node cluster "Edge Cluster – RJO" and after that, we can select the edge nodes that we want to use for the load balancer, by clicking "Set":

Select Edge

This step is not mandatory; if you don't select any nodes, NSX-T will auto-select the edge nodes to run on.

Another noteworthy part in the configuration of the load balancer is the advertisement of the addresses:

Route advertisement

We select "All LB VIP Routes" so that the addresses we are going to use with the load balancer will be known throughout the network. In our example, we will use a totally different addressing scheme, just to show the route advertisement.

Now that the tier-1 gateway is created, we can create a load balancer. The number of load balancers that can be created is dependent on the size of the edge nodes:

load balancer sizing - 1

and the number of virtual servers that one load balancer can host is dependent on the size of the load balancer:

load balancer sizing - 2

In our lab-environment, there are no issues here, as long as we select an edge node which is at least “medium” sized.

So time to (finally) create the load balancer:

add load balancer

And after the creation (press save), we can select one or more virtual servers:

create virtual server

As you can see, we have used an IP-address in a completely different subnet. We have selected the Server Pool we created earlier (although the name is not fully visible, trust me on this one ;)).

We select an Application Profile, which defines how the application will behave. One option we have here is to insert a header, for when the website is running on a server that hosts multiple web sites separated by a host header.

I do like the “Sorry Server Pool”, which will be used when the Virtual Server is unavailable or maxed out. This is a functionality that NSX-V doesn’t have. We can also define how persistence is managed. With a Layer-4-TCP Virtual Server (which we are creating) this is limited to Source IP, but when we create a Layer-7-HTTP virtual server, we can also select “Cookie” as a method for persistence.

The last thing to do is make sure that the route distribution is not just done on the tier-1, but also on the tier-0. So we have to select the tier-0 and edit the route re-distribution settings there:

route redistribution

and add the LB VIP. Just a small check on one of the other tier-0 DR’s, to see if the route is distributed:

route
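That check can be done from the edge node CLI (the same approach I use in the routing posts); a rough sketch, where the vrf number comes from the get logical-routers output and will differ per environment:

get logical-routers
vrf 1
get route

and look for the LB VIP prefix in the output.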

And then, trying out the load balanced web-page (pushing F5):

load balancing

(just a small section of the page, but you get the gist…)
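Instead of hammering F5, you can also watch the distribution from any Linux machine that can reach the VIP (the address below is just a placeholder for the VIP configured on the virtual server; the grep pattern matches the server names shown on my test pages):

for i in $(seq 1 6); do curl -s http://<VIP-address>/ | grep -io "server-[12]"; done

With both pool members up and the default round robin algorithm, the output should alternate between server-1 and server-2.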

When powering off one of the web servers, I can see the virtual server being partially available within the Advanced Networking & Security dashboard:

load balancing-one failure-1

and when I select one of the items that is not fully green, I can get some extra details:

load balancing-one failure-2

This is all within the Advanced Networking & Security part of NSX-T; I haven't found a way of viewing this in the "new" interface yet.

The great part of the one-arm load balancer is that I can add a new web server to it and put it on a totally different (but reachable) part of the network.

Next blog is about the inline load balancing (or two-arm as it is sometimes called).

Routing with NSX-T (part 3) – High Availability with ECMP and BGP

Okay, one more thing to do: get redundant! In the topology that I created, I use one Edge Node within a Cluster, so when that edge node fails, all traffic will cease. So we need redundancy.

In this post I am going to write about ECMP and combine it with BGP.

I'm not going to go into the workings of ECMP, there are a lot of excellent blogs about that. For now it is sufficient to know that ECMP stands for Equal Cost Multi Pathing and that it allows routers to use multiple paths to get from A to B. As such it is viable as a means of getting both load balancing (multiple paths means distributing load across the paths) and redundancy (if one path is lost, the other path(s) can still be used). To get this redundancy it is necessary to use a dynamic routing protocol like BGP to detect lost neighbors and to remove those paths from the forwarding table.

Just to have some idea of what I'm configuring in this blog, here is an overview of the topology:

topology

I am creating the tier-0 and other objects in the red square. The virtual machine in the orange square (10.200.1.11) is used to test communication.

So after the small piece of theory on ECMP, we create two (virtual) edge nodes within one edge node cluster. The creation of the edge nodes is described in a previous post: Routing with NSX-T (part 2):

edge node cluster

A special consideration is the Edge Cluster Profile. In this profile you can define the way that NSX-T handles BFD (Bidirectional Forwarding Detection). This is a protocol that can be used to detect that a peer is no longer available for forwarding and should be removed from the list.

edge node cluster profile

So when you are using physical edge nodes, the BFD Probe Interval can be reduced to 300 ms (the minimum value), which (combined with the default BFD Declare Dead Multiple of 3) makes for sub-second fail-over. The minimum value for BFD Declare Dead Multiple is 2, so you could even make it smaller, but be careful of false positives! You don't want your routing to flap when a couple of BFD packets are lost due to congestion.

Since we are using virtual machines, the default values (1000 and 3) are perfectly fine and will lead to a failover time of a little over 3 seconds.

So after that, I created a new tier-0 gateway, to run on the created edge node cluster. First time around, I used Active/Standby as the HA Mode and I connected it to the edge cluster I just created (consisting of two edge nodes):

tier-0-rjo-ha

Within the tier-0, I connected a segment with subnet 10.115.1.0/24, and in this segment two virtual machines are running: 10.115.1.11 and 10.115.1.12.

I created two uplinks to the rest of the network:

tier-0-uplinks

Please note, in a real-life situation you should use two separate VLANs for the uplinks, but since I am limited there (and don't have control over the physical layer), I use one and the same. One of the uplinks has .241 as the last octet, the other has .242.

After the creation of the uplinks, I configured and connected BGP:

BGP neighbors

I used the minimum timers for a fast failover, but in a production environment it is advised to consider these values carefully. Of course, I also configured the neighbors with the correct settings, to make sure they can communicate with each other and have the same timers set.
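The other side of those sessions lives outside NSX-T, so just as a hedged illustration of the "same settings on the other side" part: on an FRR-based upstream router (my lab uses a different platform; the addresses and AS numbers below are made-up examples), the matching configuration could look roughly like this:

router bgp 65000
 neighbor 192.0.2.241 remote-as 65020
 neighbor 192.0.2.241 timers 1 3
 neighbor 192.0.2.241 bfd
 neighbor 192.0.2.242 remote-as 65020
 neighbor 192.0.2.242 timers 1 3
 neighbor 192.0.2.242 bfd

The point being: the timers (1/3 here, matching the NSX-T minimum) and BFD have to be agreed upon by both ends.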

After that, it is time to take a look at the routing tables, to see if and how our newly created segment is advertised throughout the network.

In order to get the routing tables, I enabled SSH on the edge nodes. You should be able to use the NSX Manager CLI for this, but I didn't get the right response, so working on the edge nodes was easier for me.

To enable ssh (disabled by default and not an option in the GUI), use the (remote) console of the edge node and type in:

start service ssh

and to make sure it will start at the next reboot:

set service ssh start-on-boot

After this, we can use PuTTY (or any other tool you wish) to SSH into the edge nodes. First we look up the logical router we want to query:

logical routers

We want to query the DR-Tier-0-GW Distributed Router (the tier-0 on the left side of the topology drawing), to see what routes it has to reach the virtual machines within the created segments (subnet 10.115.1.0/24).

To query the forwarding table of this DR, we use the following:

forwarding table DR

The table is truncated, but we can see that the subnet 10.115.1.0/24 is reachable through one (and only one) of the interfaces (last octet .242). The reason for this is that we used Active/Standby as the HA Mode. Later on, we’ll look at the Active/Active mode, but for now this is good enough.
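For reference, the commands behind these two screenshots were roughly the following (the vrf number is taken from the get logical-routers output and will differ per environment):

get logical-routers
vrf 1
get forwarding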

Try to ping one of the virtual machines in the 10.115.1.0/24 segment, from the virtual machine in the orange square and voila:

ping from web-1

Now, as the proof of the pudding is in the eating, we’ll power down the active edge node. So, let’s find out which of the edge nodes is active. For that we can use the Advanced Networking part of NSX, to see which of the interfaces is active and on which edge node it resides:

active SR

So we can see that the active SR is running on Edge-TN-06, which is in line with the configuration. As you could see in earlier pictures and in the routing table, we are using XX.XX.XX.242. This IP-address was configured on the interface which was connected to Edge-TN-06 and that is the one that is Active here.

So time to power-off edge-tn-06 and see what happens…

When we look at the ping:

ping-na-failure

We see three missed pings, which is in line with the configured BFD timers. I did a quick test where I changed the timers from 1000 ms to 2000 ms (keeping 3 as the Declare Dead Multiple) and changed the BGP timers to 10 and 30 (to make sure that BGP doesn't mark the path as failed) and then 6 pings are missed.

So when using BGP and BFD in combination, the lesser of the time-outs determines how long it takes for the paths to be removed from the routing table. So when I configured BGP as 1 and 3 and kept BFD at 2000 ms and 3, the time went down to 3 seconds again. Normally the BFD timers would be the lowest, since detecting failures is what this protocol is made for.
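Putting the tests above into a small rule of thumb (with pings sent every second, so roughly one lost ping per second of detection time):

detection time ≈ min(BFD probe interval × Declare Dead Multiple, BGP hold time)

BFD 1000 ms × 3 = 3 s, BGP hold 3 s  -> ~3 lost pings
BFD 2000 ms × 3 = 6 s, BGP hold 30 s -> ~6 lost pings
BFD 2000 ms × 3 = 6 s, BGP hold 3 s  -> ~3 lost pings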

When we look at the active interfaces in Advanced Networking:

active SR-after failure

We see that Edge-TN-05 is now active and the status of 06 is unknown. And when we look at the forwarding table on the edge-node:

forwarding table DR - na failure

We can see that the forwarding table has also switched to the other interface.

So after all this, there is one more thing to do: do it all again, but this time create a tier-0 which is configured for Active/Active. I will omit all the steps and just show the last two pictures, from an Active/Active setup, with all edge nodes running:

forwarding table DR - ecmp

(it is important to have the edge-node turned on again, otherwise the route will not appear ;)).

active SR-ecmp

So in both pictures we can see that there are two active paths for the 10.115.1.0/24 subnet.

The failover time is the same, because it will still take three seconds before the failed path is removed from the routing table:

ping-na-failure2

So after all this routing, I’ll have a look at Load Balancing and NAT within NSX-T. Stay tuned!

Routing with NSX-T (part 2)

So after the theory, it is time to create some routed networks. In order to do this, I created a design of what I want to accomplish. Part of the design is already in place, but for this blog I have created the part which is visible in the red square:

Scope

So we already have a tier-0 gateway, connected to a tier-1 gateway, which is connected to several segments. The existing tier-1 is also used as a load balancer for my three-tier app. In the picture you can see the red and green virtual servers, which are running on the Load Balancer, connected to the tier-1 gateway.

I also connected a segment to the tier-0 gateway, just to see how routing between tiers would function and as might be expected, routing is done within the hypervisor.

So time to create the new parts within the design.

First we have to create a new Edge Node. This is necessary because my existing edge node is not capable of running multiple tier-0 gateways with uplinks to the physical network. So when creating an Edge Node, it is important to have the following information handy:

  • Transport Zone the edge node is going to be connected to
  • N-VDS’s used to connect to the transport zones
  • Connectivity to the physical network (in my case, through the use of dPG’s).
  • FQDN / IP-address / Port Group for management of the edge node

After all this information is gathered, it is time to create an edge node:

TN-Edge-02 - Step 1TN-Edge-02 - Step 2TN-Edge-02 - Step 3TN-Edge-02 - Step 4TN-Edge-02 - Step 5

After the edge node is created and running, it can be added to an Edge Node Cluster (no picture taken, this is self-explanatory).

When the edge node cluster is available, we can deploy our tier-0 gateway:

Tier-0-RJO

And after that we can create an interface, which can be used as an uplink for this tier-0 to the physical network (last octet being .250).

Tier-0-RJO - uplink

At this stage, the tier-0 gateway is available and new segments can be created and connected to the tier-0. As can be seen in the picture, I created two segments, connected to Tier-0-RJO (one showing):

Segment-2

As you can see, this segment is connected to Tier-0-RJO. I have added an IP-address and subnet to the segment, which will automatically create the corresponding interface to the tier-0 gateway.

Segment-2-subnet

This way I have created two segments, with the subnets 10.209.1.0/24 and 10.209.2.0/24. Both are connected to the tier-0, so virtual machines deployed to these two segments should be able to communicate with each other:

Ping

Within NSX (V or T) there is a tool called TraceFlow which allows us to see a little further than just a response to a ping request. So to see how traffic flows from one virtual machine to the other, I used this tool.

It displays the path the packet takes:

traceflow-2

and every step along the way:

traceflow-1

So this basically is everything in the red square…

However, I also want to see if it is possible to communicate from this part of the network, to the part on the right. I want to use my web-server, which is connected to the segment LS-WEB.

If I try this without any further configuration, it (of course) fails. There is no route information being exchanged between the two tier-0's and no static routes have been configured either.

In order to get the communication working, I configured BGP. I have set up BGP on both tier-0's, using different AS numbers (65010 for Tier-0-GW and 65020 for Tier-0-RJO).

Then connect both tier-0's as each other's neighbor:

BGP-Tier-0-RJO-1BGP-Tier-0-GW-1

After that, I selected which routes need to be advertised between the tier-0’s (by default nothing is exchanged):

Route redistribution

and after that, Bob’s your uncle:

Ping-2
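Besides the ping, you can also check the BGP session itself from the edge node CLI, in the context of the tier-0 SR (a rough sketch; the vrf number comes from the get logical-routers output):

get logical-routers
vrf 2
get bgp neighbor

The neighbor state should show Established before any routes are exchanged.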

Just to see what the path between the two virtual machines looks like, I used TraceFlow one more time; a pretty impressive flow for such a small environment 😉 :

traceflow-3

So that is routing within NSX-T. Next up, Load Balancing…

Routing with NSX-T (part 1)

For someone who has worked with NSX-V for quite some time, routing within NSX-T is a little bit difficult to grasp. The terminology might be only slightly different, but the concepts are completely different.

Within NSX-V, you had E-W routing through a DLR and N-S routing through one or more ESG's, either in HA or in ECMP. That was all pretty straightforward. Stateful services could only be performed by ESG's, and if they were, those ESG's could no longer be part of an ECMP based routing setup.

All pretty easy concepts.

Within NSX-T this is different. There are multiple ways to look at routing (and stateful  services). There are a couple of topics to talk about, all relevant to routing within NSX-T. Let’s review the topics:

  • Tiering: Tier-0 and Tier-1
  • Router Types: Logical Router / Service Router / Distributed Router
  • Edge Node / Edge Node Cluster

Tiering
First let's look at the tiers. Within NSX-V you had the possibility to create a multi-tier routing entity, for instance if you were a service provider or had other reasons to have multiple tiers. All of this was done through the use of multiple ESG's, put together in a hierarchical way.

Within NSX-T you get two tiers, but they are explicitly defined within the product. So you can easily deploy your routing in a tiered fashion. It is however not necessary to do this; if you want to, you can use just a tier-0 for routing. However, within tier-0 there is no place for services like Load Balancing, so if you want to do some load balancing, you have to add some tier-1 entities. NAT can be done on both tier-0 and tier-1, VPN is done on tier-0.

So essentially, you deploy one or more tier-0 gateways (the new name from 2.4 and onwards) and after that connect one or multiple tier-1 gateways to that tier-0 gateway.

The routing is then done in a hierarchical way. The tier-1 has to be connected to a tier-0 and this will lead to an (automatically) created tier-transit-link, which will be used to exchange routing information between the tiers.

Router Types
So within NSX-T we get some distinct router types. The following entities can be distinguished:

  • Logical Router
  • Distributed Router
  • Service Router

The Logical Router (LR) is basically a combination of the Distributed Router (DR) and the Service Router (SR). The Distributed Router, like the DLR in NSX-V, is responsible for routing traffic in a distributed (duh) fashion. This can be either on tier-0 or tier-1 (or a combination of the two). So everything that can be routed within the virtual environment will be handled by this distributed part of the logical router. This will (like with NSX-V) be performed by every hypervisor in the fabric (attached to the same Transport Zone). So a logical router can have a DR and an SR component, which are deployed on different parts of the underlying infrastructure. The two components work together.

The Service Router is an entity that will handle all the stateful services, like load balancing, NAT and VPN. This is an instance that is not distributed, but runs on an Edge Node, where it services everything that uses it. The Service Router can be combined with either a tier-0 or a tier-1 gateway, depending on the type of service.

Edge Nodes (Clusters)
Edge Nodes are physical or virtual machines in which Service Routers can be placed. It is a decoupling of the "physical" and "logical" part of the functionality. Within V each ESG was represented by a virtual machine (or two, when using HA). Within T, you do need an Edge Node, but within that Edge Node you can create one or multiple tier-0 and tier-1 gateways, each with their SR-based functionality. There are some limits when it comes to the number of tier-0 and tier-1 gateways that can be placed on a single Edge Node, but I will not touch on that, since this might change in future releases. Please look at the VMware maximums guides to see the actual limits.

The Edge Nodes can be physical or virtual and when virtual they come in different sizes (small, medium, large), with different limits and features.

The Edge Nodes can be combined into Edge Clusters, thus giving redundancy and balancing of functionality. When creating tier-0 or tier-1 gateways, you select the Edge Node Cluster to which they will be deployed. This can be changed afterwards though, so it's not set in stone.

So, that describes the theory, in the next post, I will be deploying routing within my test-environment.

NSX-T 2.4 – Setup (1) – Transport Nodes

So, as you might know, I work for a company called PQR (www.pqr.com). Within PQR it is important for us to be able to show our customers all the functionality that they might be interested in, before they implement it themselves. This started off with something called the PQR Experience Center (or PEC for short), which was mainly focused on the workspace part of IT.

In it, we are able to show customers how they can use workspace components within their environment. If they want to know how a specific application would perform on NVidia Grid cards, without actually purchasing them, they can try it out in the PEC.

After the huge success of the PEC on the workspace part, we have also added an SDDC part to our PEC. In it, we can show SDDC-related functionality to our customers. My role in this is that I set up the PEC on a couple of physical servers and create multiple nested environments to show the (mainly) VMware products to customers. We have also leveraged the PEC when doing demos on stage, for instance at the NLVMUG last year and at vNSTechCon last September.

Because of the flexibility of this environment, it is easy to add a new NSX-T (nested) environment to our PEC and that is what I did. This post describes the way this is set-up.

Basics
So I started off with a new environment, consisting of one vCenter Server and four hosts. The hosts are installed based on the efforts of William Lam, to whom I owe a huge debt of gratitude. It is very easy to roll out a new environment. Check out https://www.virtuallyghetto.com/2018/04/nested-esxi-6-7-virtual-appliance-updates.html if you are interested.

In order to connect your hosts to the NSX-T network, you need the following items:

  • N-VDS
  • Transport Zone(s)
  • Transport Node Profile
  • Uplink Profile
  • Underlay-network

N-VDS
N-VDS is a new construct which comes with NSX-T. It is like a distributed switch within vSphere, but managed entirely through NSX-T. Because NSX-T supports multiple entities, like hypervisors, clouds and so on, it was necessary to use a construct that is not bound to vSphere. So an N-VDS can be created and connected to ESXi hosts, KVM hosts, bare-metal servers and clouds.

N-VDS’s use uplinks to connect to the physical world. The creation of an N-VDS is done when creating a Transport Zone and the name cannot be changed afterwards (or at least I have not found a way yet).

Transport Zones
Transport Zones are, like in NSX-V, boundaries for Logical Switches (Segments).

A Transport Zone can contain segments (overlay or vlan) and nodes connected to that transport zone can use the segments that belong to that transport zone.

Transport Zones come in two flavors. Overlay and VLAN.

You can create an Overlay transport zone, which will be based on overlaying (duh). This is comparable to the Transport Zones you create in NSX-V, which will contain logical switches (in NSX-T, starting with version 2.4, they are called Segments).

The second flavor is a VLAN-based transport zone. That way you can leverage VLAN tagging on the underlying (physical) network, as you would with any normal port group within vSphere. This construct was added because with NSX-T you can't microsegment virtual machines (or other entities) if they are not connected to an N-VDS.

I created two transport zones, one Overlay and one VLAN-based Transport Zone, and I added them both to the same N-VDS (which is called NVDS-Overlay; yeah I know, that could have been named better ;)).

You connect the Transport-Zone to an N-VDS and that connection cannot be changed.

Transport Node Profile (only when using vCenter Server)
This describes the manner in which the Transport Node (host) will be connected to the NSX-T network. In it, you can define the connected Transport Zones, N-VDS’s and the profiles used to connect the transport nodes (host) to the physical network.

As you can see, in this profile it is possible to determine the method of how to connect to the physical network. It is possible to add multiple N-VDS's, each with its own configuration.

This is only used when connecting a cluster within vCenter to NSX-T, in order to create a consistent configuration. When adding stand-alone hosts, this profile is not used.

IP Assignment and IP Pool
This is used to define the way in which the Tunnel EndPoints (TEP) get their IP address. I have chosen to use an IP pool for this, since there is no DHCP present in the network to which the TEP's are connected.

Uplink Profile
This is used to determine in which way the hosts are connected to the physical network. Select the number of uplinks, the teaming policy (Failover or Load Balancing) and the transport VLAN. MTU size will default to 1600, but it is possible to change that as well:

Uplink Profile
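Since a too-small MTU on the physical network is one of the more common issues with overlay networking, it is worth verifying once the TEPs are up. From an ESXi host something like the following can be used (a hedged sketch: the netstack name used for the NSX-T TEPs and the remote TEP address are assumptions based on my lab; -s 1572 plus the 28 bytes of ICMP/IP overhead tests the full 1600 bytes):

vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP>

If this fails while smaller sizes work, the underlay MTU is the first thing to check.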

So after all this is done, it is time to connect the hosts to the NSX-T network. Originally I connected four hosts; now I am adding a fifth, but this one is not connected to a vCenter Server and is created as stand-alone, to demonstrate that it is not necessary to be connected to a vCenter Server to use NSX-T. I might connect the host later on, but for now, this seems like a nice opportunity to test the independence of vCenter when it comes to NSX-T.

The first four hosts were called nsx-t-aX (with X being a number from 1 to 4); this one is called nsx-t-b1. I did have to update it to the proper ESXi build, since there are some requirements for NSX-T 2.4:

Interoperability

Adding node to NSX-T

After this is checked, we can add a node to NSX-T, through the Host Transport Nodes page, under System | Fabric | Nodes:

Add Host Transport Node

After that, the wizard will help us add the transport node to NSX-T:

Add Host Transport Node - 2

Add Host Transport Node - 3

And it will start installation and configuration at this point:

Add Host Transport Node - 4

And after this, all is well and all associated networks are available to the host, albeit with an error in the host-based interface:

Standalone host - after installation
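To verify on the host itself that the NSX-T components were actually installed (regardless of the error shown in the host client), you can list the installed VIBs (a generic check; the exact VIB names differ per NSX-T version):

esxcli software vib list | grep -i nsx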

But when I deploy a new virtual machine, I can connect it to an overlay-based network:

New VM - network

Also, when powering on the virtual machine and giving it a correct IP configuration, I can ping the rest of the network, switched and routed:

web-server in the same subnet (other host):
web-server in the same subnet (other host)
gateway (dr-instance on same host):
gateway (dr-instance on same host)
app-server in different subnet (other host):
app-server in different subnet (other host)
physical windows server (through north-south routing):
physical windows server (through north-south routing)

I chose to change the existing networking connections during the installation, but if you don’t want to, you can migrate vmkernel ports afterwards. I did write another blog about that: Migrating from Standard Switch to NVDS with NSX-T 2.4.

Next time I will write a blog about routing, since that has seen some major changes from V to T. Enough to make your head spin :).