Networking – Daniel's Tech Blog

Convert Standard Switch to Logical Switch in System Center 2016 TP3 VMM


How often have I rebuilt the Hyper-V switch configuration at customer sites in the last few years? I cannot remember. Whenever I implemented VMM or reviewed an existing VMM installation at a customer, I designed or redesigned the Logical Network configuration and with it the Logical Switch configuration. In System Center 2012 R2 VMM you had to do the following to replace the Standard Switch on a Hyper-V host with a Logical Switch; a hedged PowerShell sketch of these steps follows the list.

  1. Enable the maintenance mode on the Hyper-V host
  2. Delete any vNICs of the management OS
  3. Delete the Standard Switch
  4. Delete the network team
  5. Deploy the Logical Switch
  6. Disable the maintenance mode on the Hyper-V host
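A sketch of steps 2 to 4, plus the maintenance mode steps, could look like the following. The host name, switch name and team name are placeholders, and I am assuming Disable-SCVMHost/Enable-SCVMHost for the VMM maintenance mode part; verify this against your environment before using it.

#Minimal sketch - names are placeholders
$vmHost = Get-SCVMHost -ComputerName "HV01"
#1. Enable maintenance mode (assumed VMM cmdlet usage)
Disable-SCVMHost -VMHost $vmHost -MoveWithinCluster
Invoke-Command -ComputerName "HV01" -ScriptBlock {
    #2. Delete the vNICs of the management OS
    Get-VMNetworkAdapter -ManagementOS | Remove-VMNetworkAdapter
    #3. Delete the Standard Switch
    Remove-VMSwitch -Name "vSwitch" -Force
    #4. Delete the network team
    Remove-NetLbfoTeam -Name "Team01" -Confirm:$false
}
#5. Deploy the Logical Switch through the VMM console
#6. Disable maintenance mode
Enable-SCVMHost -VMHost $vmHost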

Finally, in System Center 2016 TP3 VMM, and it was already included in TP2, we got the option to convert a Standard Switch to a Logical Switch. At my first MVP Summit in 2013 I gave the following feedback to Vijay Tewari from the VMM product group.

When you add a Hyper-V host to VMM, a Standard Switch whose settings match a defined Logical Switch should be recognized as a Logical Switch and not as a Standard Switch.

The convert option may be the outcome of this feedback, and I am happy that we got it.

Before I get to the conversion process, I would like to talk about the requirements. First of all, in the properties of the Hyper-V host you have to assign the Logical Networks to the interface the Standard Switch is bound to.

image

Second, the bandwidth mode of the Standard Switch and the Logical Switch must be identical.

Go into the properties of the Hyper-V host and click on Virtual Switches. If you have a Logical Switch that matches the settings of the Standard Switch you will see the Convert to Logical Switch button.

SwitchConvert01

A small window then pops up where you can choose the Logical Switch and the uplink port profile to use.

SwitchConvert02

You can have a look at the job entry to see if the conversion was successful.

SwitchConvert03

Go back into the Hyper-V host properties and you will see under the Virtual Switches section that the Standard Switch is now a Logical Switch.

SwitchConvert04



Working with vNIC Consistent Naming in System Center 2016 TP3 VMM


Windows Server 2016 TP3 Hyper-V includes a very interesting feature that goes by several names out there: vCDN, Consistent Naming of Virtual Network Adapters and Virtual Adapter Identification. Only the last two are good wordings for it, because it is not Consistent Device Naming.

In this blog post I use the wording vNIC Consistent Naming, in line with the TechNet Library description, and will show you the benefit of this feature.

-> https://technet.microsoft.com/en-us/library/dn705350.aspx#BKMK_networking

First of all, you can use the feature even without VMM. All you need is the Hyper-V Manager, but I am going to show it to you in VMM.

When you deploy a VM with VMM, you get three options for the so-called device properties in the settings of each vNIC.

vCDN01

Device property                          Description
Do not set adapter name                  Nothing happens.
Set adapter name to name of VM network   The vNIC Consistent Naming property will have the same name as the connected VM network.
Set custom adapter name                  The vNIC Consistent Naming property will have the name you specify.

In this example I have chosen the second option. After the successful VM deployment, log in to the VM and start the PowerShell console. Type in Get-NetAdapterAdvancedProperty and execute the cmdlet.

vCDN02

As you can see, the yellow highlighted row represents our vNIC Consistent Naming property. You may also have noticed that the vNIC name is Ethernet, as it always is in Windows. So it is not Consistent Device Naming, because CDN affects the vNIC name and that is not the case with vNIC Consistent Naming.

Why should you use vNIC Consistent Naming for your deployments? Imagine you have a VM with more than one vNIC. If you depend on post-deployment scripts that are responsible for the vNIC configuration, then vNIC Consistent Naming is a huge time saver, because you can easily identify which vNIC serves which purpose and configure it accordingly.

Have a look at the next screenshot as an example. The first PowerShell command isolates the advanced vNIC property and the second one goes a step further and returns the corresponding vNIC through the advanced vNIC property.

Get-NetAdapterAdvancedProperty | Where-Object {$_.DisplayValue -eq "Management"}

Get-NetAdapterAdvancedProperty | Where-Object {$_.DisplayValue -eq "Management"} | Get-NetAdapter

vCDN03
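Building on this, a post-deployment script could, for example, rename the adapter and configure its IP settings based on the advanced property. A minimal sketch, assuming the property value "Management" and using placeholder IP configuration values:

#Find the vNIC whose consistent naming property is "Management"
$nic = Get-NetAdapterAdvancedProperty | Where-Object {$_.DisplayValue -eq "Management"} | Get-NetAdapter
#Assign a static IP configuration (example values) and rename the adapter
New-NetIPAddress -InterfaceIndex $nic.ifIndex -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceIndex $nic.ifIndex -ServerAddresses 192.168.1.2
$nic | Rename-NetAdapter -NewName "Management"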

If you want to know which device properties setting the VM is using, you can use this small PowerShell script.

$VM = Get-SCVirtualMachine -Name "NC-01.neumanndaniel.local"
$vNIC = Get-SCVirtualNetworkAdapter -VM $VM[0]
$vNIC.DevicePropertiesAdapterNameMode
$vNIC.DevicePropertiesAdapterName

vCDN04

The next idea that will surely pop up is to change the device properties setting. But that ends up in an error message.

Set-SCVirtualNetworkAdapter -DevicePropertiesAdapterNameMode Custom -DevicePropertiesAdapterName "NetworkController" -VirtualNetworkAdapter $vNIC

Set-SCVirtualNetworkAdapter : Cannot configure the DevicePropertiesAdapterNameMode of the virtual network adapter ‘NC-01.neumanndaniel.local’
because it is not a valid synthetic virtual network adapter of a Generation 2 VM Template or Hardware Profile. (Error ID: 50098)

Provide a valid synthetic virtual network adapter of a Generation 2 VM Template or Hardware Profile.

Currently the VMM cmdlet does not support it, and I have not tested yet whether it is possible with the Hyper-V Manager.

Be aware that you can only specify the device properties during the VM deployment process. When you are creating your VM template, you cannot set the device property through the UI.

vCDN05

You have to use PowerShell instead, if you do not want to remember to set the device properties every time you deploy a VM.

$VMTemplate = Get-SCVMTemplate -Name "Network Controller"
$vNIC = Get-SCVirtualNetworkAdapter -VMTemplate $VMTemplate
Set-SCVirtualNetworkAdapter -DevicePropertiesAdapterNameMode Custom -DevicePropertiesAdapterName "Southbound" -VirtualNetworkAdapter $vNIC

When you start the VM deployment process again the device properties are now set!

vCDN06vCDN07

I highly recommend using this feature every time you deploy a VM with more than one vNIC, especially when you are deploying NVGRE gateways in an SDN scenario.


Manage Windows Server 2016 TP3 Network Controller with System Center 2016 TP3 VMM


Currently, in Technical Preview 3, you can use either PowerShell or System Center 2016 VMM to manage a Network Controller instance. I have not had the time yet to test the management capabilities provided by System Center 2016 Operations Manager.

You can manage your datacenter network with Network Controller by using management applications, such as System Center Virtual Machine Manager (SCVMM), and System Center Operations Manager (SCOM), because Network Controller allows you to configure, monitor, program, and troubleshoot the network infrastructure under its control.

-> https://technet.microsoft.com/en-us/library/dn859239.aspx

During my tests with VMM I discovered that you can bring Logical Networks, Logical Subnets, IP Pools, VM Networks and Logical Switches under Network Controller management via the VMM console.

Before you start, connect the Network Controller instance with VMM.

-> http://www.danielstechblog.de/connect-windows-server-2016-tp3-network-controller-with-system-center-2016-tp3-vmm/

When you create a new Logical Network, you can decide whether it will be under the management of the Network Controller instance or not. Furthermore, you can specify whether it is a public or private IP address network. Another important setting is “Allow new VM networks created on this logical network to use network virtualization”. Why? Because the intention behind the Network Controller is Software Defined Networking.

ManagedbyNC01

After you have created the Logical Network, you can create an IP Pool, which will automatically be under the management of the Network Controller instance because of the inheritance from the parent object. The same applies to a VM Network and its associated IP Pools.

ManagedbyNC02ManagedbyNC03

Make sure you choose “Isolate using Hyper-V network virtualization” as the isolation option, otherwise the VM Network creation fails.

ManagedbyNC04

Now you can create the Logical Switch, and even before that the Uplink Port Profile. During the Logical Switch creation process you can decide whether it will be under the management of the Network Controller instance or not.

ManagedbyNC05ManagedbyNC06

One important note here: you cannot create any vNIC adapter on a Logical Switch that is under the management of a Network Controller instance.

ManagedbyNC07

When you now have a look at your Network Controller properties, you will see the created Logical Network under Logical Network Affinity.

ManagedbyNC08

As I mentioned earlier, you can use PowerShell as well, but you do not want to. Why? Let me show you an example of creating a Logical Network, a Logical Subnet and an IP Pool through PowerShell.

#New IP Pool Definition
$resourceIdIPPool=[system.guid]::NewGuid()
$StartIPAddress="10.1.0.1"
$EndIPAddress="10.1.0.254"
$IPPool = [Microsoft.Windows.NetworkController.IpPool] @{}
$IPPool.ResourceMetadata = @{}
$IPPool.resourceId = $resourceIdIPPool
$IPPool.properties = [Microsoft.Windows.NetworkController.IpPoolProperties] @{}
$IPPool.properties.startIpAddress = $StartIPAddress
$IPPool.properties.endIpAddress = $EndIPAddress
$IPPool.ResourceMetadata.ResourceName = "Network Controller IP Pool"
#Logical Network Subnet Definition
$resourceIdSubnet=[system.guid]::NewGuid()
$IPPools=$IPPool
$LogicalNetworkSubnet = [Microsoft.Windows.NetworkController.LogicalSubnet] @{}
$LogicalNetworkSubnet.ResourceMetadata = @{}
$LogicalNetworkSubnet.resourceId = $resourceIdSubnet
$LogicalNetworkSubnet.Properties =[Microsoft.Windows.NetworkController.LogicalSubnetProperties] @{}
$LogicalNetworkSubnet.Properties.AddressPrefix = "10.1.0.0/24"
$LogicalNetworkSubnet.Properties.VlanID = 0
$LogicalNetworkSubnet.Properties.IpPools=$IPPools
$LogicalNetworkSubnet.ResourceMetadata.ResourceName="10.1.0.0/24"
#Logical Network Definition
$resourceIdNetwork=[system.guid]::NewGuid()
$LogicalNetworkSubnets=$LogicalNetworkSubnet
$LogicalNetwork = [Microsoft.Windows.NetworkController.LogicalNetwork] @{}
$LogicalNetwork.resourceId = $resourceIdNetwork
$LogicalNetwork.properties =[Microsoft.Windows.NetworkController.LogicalNetworkProperties] @{}
$LogicalNetwork.ResourceMetadata = @{}
$LogicalNetwork.properties.subnets = $LogicalNetworkSubnets
$LogicalNetwork.properties.networkVirtualizationEnabled = "True"
$LogicalNetwork.ResourceMetadata.ResourceName="Network Controller"
$ConnectionUri="https://vmm-cdm.azure.local"
New-NetworkControllerLogicalNetwork -ResourceId $resourceIdNetwork -Properties $LogicalNetwork.properties -ResourceMetadata $LogicalNetwork.ResourceMetadata -ConnectionUri $ConnectionUri -Force -Verbose
New-NetworkControllerLogicalSubnet -LogicalNetworkId $resourceIdNetwork -ResourceId $resourceIdSubnet -Properties $LogicalNetworkSubnet.Properties -ResourceMetadata $LogicalNetworkSubnet.ResourceMetadata -ConnectionUri $ConnectionUri -Force -Verbose
New-NetworkControllerIpPool -NetworkId $resourceIdNetwork -SubnetId $resourceIdSubnet -ResourceId $resourceIdIPPool -Properties $IPPool.Properties -ResourceMetadata $IPPool.ResourceMetadata -ConnectionUri $ConnectionUri -Force -Verbose

As you can see, it is a lot of work, and currently the Remove PowerShell cmdlets do not seem to work in TP3.


Working with VM Update Functional Level in System Center 2016 TP3 VMM


In Windows Server 2016 Microsoft introduces the next VM configuration version, 6.2, as they have done with every new server version since Hyper-V was introduced. This time one important part is different: VMs will not be updated automatically to the newest VM version. The reason for that is the new Rolling Cluster Upgrade feature.

In short, VMs with version 6.2 are not compatible with Windows Server 2012 R2 Hyper-V, but VM version 5.0, the latest under Windows Server 2012 R2 Hyper-V, is compatible with both Windows Server 2012 R2 and Windows Server 2016. That said, you have to update your VMs to version 6.2 manually.

-> https://technet.microsoft.com/en-us/library/dn765471.aspx#BKMK_ConfgVersion

Using System Center 2016 TP3 VMM, we have the alternative of doing the update through the VMM console instead of using the Hyper-V PowerShell cmdlets.

First we check the VM version with PowerShell.

UpdateConfigurationVersion01
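If you prefer to do the check outside of the VMM console, the native Hyper-V PowerShell cmdlets expose the version as well; a minimal sketch (the VM name is a placeholder):

#Check the configuration version of a single VM
Get-VM -Name "NC-01" | Select-Object Name, Version
#List the version of all VMs on the host
Get-VM | Format-Table Name, Version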

The VM has to be turned off to update the VM version! In this respect there is no difference between VMM and the native Hyper-V PowerShell cmdlets. In the ribbon bar we click on Update Functional Level to raise the VM version. We will get a warning message that this cannot be undone.

UpdateConfigurationVersion02UpdateConfigurationVersion03

Checking the VM version again the output should be 6.2.

UpdateConfigurationVersion04

If we have several VMs that we want to update, then we can use the VMM PowerShell cmdlet Update-SCVMVersion for it.
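A minimal sketch of such a bulk update, assuming the Status filter value and the -VM parameter fit your VMM environment (the VMs still have to be turned off):

#Update the configuration version of all powered-off VMs managed by VMM
Get-SCVirtualMachine | Where-Object {$_.Status -eq "PowerOff"} | ForEach-Object {
    Update-SCVMVersion -VM $_
}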


Microsoft Azure Network Security Group effective security rules evaluation


Ever faced the problem that you defined rules in your Network Security Groups, attached one NSG to the virtual subnet and the other one to the VM’s NIC, and finally lost track of which rules of which NSG are applied to the VM?

If you can answer the question with yes, then Azure provides the solution for it. A hidden gem: the effective security rules.

The effective security rules evaluation can be found under the category SUPPORT + TROUBLESHOOTING in each NSG or NIC.

NSGER01

You only have to select the VM and the NIC. You then get an overview of which NSGs are associated with the VM’s NIC and which rules are applied to it.

NSGER02
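The same information can also be retrieved with Azure PowerShell; a minimal sketch with placeholder names:

#Get the effective security rules applied to a NIC
Get-AzureRmEffectiveNetworkSecurityGroup -NetworkInterfaceName "vm01-nic" -ResourceGroupName "my-resource-group"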

For an offline analysis there is a download option that generates a CSV file of the output.

NSGER03


Microsoft Azure Route Table effective routes evaluation


Another hidden gem in Azure is the effective routes evaluation.

The effective routes evaluation can be found under the category SUPPORT + TROUBLESHOOTING in each Route Table or NIC.

Effective_Route00

You only have to select the subnet and the VM’s NIC. You then get an overview of which routes are applied to the VM’s NIC.

Effective_Route02Effective_Route01
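As with the effective security rules, Azure PowerShell can return the same output; a minimal sketch with placeholder names:

#Get the effective routes applied to a NIC
Get-AzureRmEffectiveRouteTable -NetworkInterfaceName "vm01-nic" -ResourceGroupName "my-resource-group"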

For an offline analysis there is a download option that generates a CSV file of the output.

Effective_Route03


Azure services URLs and IP addresses for firewall or proxy whitelisting


When you are working with Azure, you sometimes have to whitelist specific IP address ranges or URLs in your corporate firewall or proxy to access all the Azure services you are using or trying to use.

Some information, like the datacenter IP ranges and some of the URLs, is easy to find. Other things, like the calling IP addresses of specific Azure services or specific URLs, are harder to track down.

The list of Azure service specific URLs and IP addresses in this blog post is not complete and is only a snapshot at the time of writing.

The post is divided into the following sections: IP addresses, calling IP addresses and URLs.

I hope you find the summary useful and helpful for your day-to-day work with Azure.

IP addresses:

Datacenter IP ranges:

-> https://www.microsoft.com/en-us/download/details.aspx?id=41653

Azure CDN:

-> https://msdn.microsoft.com/library/mt757330.aspx

Calling IP addresses:

Logic App:

-> https://docs.microsoft.com/en-us/azure/app-service-logic/app-service-logic-limits-and-config#configuration

Traffic Manager:

Have a look at the section “What are the IP addresses from which the health checks originate?”.

-> https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring#faq

URLs:

PowerShell Get-AzureRmEnvironment / Azure main services URLs:

AzureEnvironment
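You can retrieve these values yourself with Azure PowerShell; a minimal sketch:

#List the endpoint URLs and suffixes of the Azure public cloud environment
Get-AzureRmEnvironment -Name AzureCloud | Format-List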

Active Directory Service Endpoint Resource Id: https://management.core.windows.net/
Gallery Url: https://gallery.azure.com/
Management Portal Url: https://portal.azure.com, https://manage.windowsazure.com
Service Management Url: https://management.core.windows.net/
Publish Settings File Url: https://manage.windowsazure.com/publishsettings/index
Resource Manager Url: https://management.azure.com/
Sql Database Dns Suffix: .database.windows.net
Storage Endpoint Suffix: core.windows.net (.blob.core.windows.net, .queue.core.windows.net, .table.core.windows.net, .file.core.windows.net)
Active Directory Authority: https://login.microsoftonline.com/
Graph Url: https://graph.windows.net/
Graph Endpoint Resource Id: https://graph.windows.net/
Traffic Manager Dns Suffix: trafficmanager.net
Azure Key Vault Dns Suffix: vault.azure.net
Azure Data Lake Store File System Endpoint Suffix: azuredatalakestore.net
Azure Data Lake Analytics Catalog And Job Endpoint Suffix: azuredatalakeanalytics.net
Azure Key Vault Service Endpoint Resource Id: https://vault.azure.net

Other Azure services URLs:

Redis Cache: .redis.cache.windows.net
App Service: .azurewebsites.net
DocumentDB: documents.azure.com
Azure Batch: batch.azure.com, .{region}.batch.azure.com
Machine Learning Studio: studio.azureml.net
Machine Learning Gallery: https://gallery.cortanaintelligence.com/
Machine Learning Web Service Management: services.azureml.net
Service Bus: .servicebus.windows.net
Event Hubs: .servicebus.windows.net
Azure IoT Hub: .azure-devices.net
API Management: .azure-api.net
Azure Automation: .azure-automation.net
Azure Automation Webhooks: https://s2events.azure-automation.net
Public IP, Azure LB, Web Application Gateway, Service Fabric, Azure Container Service: cloudapp.azure.com, .{region}.cloudapp.azure.com
Azure Search: .search.windows.net
Azure Analysis Services: asazure.windows.net, .{region}.asazure.windows.net
Logic App: logic.azure.com, .{region}.logic.azure.com
CDN: .azureedge.net
HDInsight: .azurehdinsight.net

Azure Site Recovery:

-> https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-best-practices#url-access

Azure Backup:

-> https://docs.microsoft.com/en-us/azure/backup/backup-azure-backup-faq#what-firewall-rules-should-be-configured-for-azure-backup-br

Azure Log Analytics (OMS):

-> https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-proxy-firewall#configure-proxy-and-firewall-settings-with-the-microsoft-monitoring-agent

StorSimple:

-> https://docs.microsoft.com/en-us/azure/storsimple/storsimple-system-requirements#networking-requirements-for-your-storsimple-device

Data Factory – Data Management Gateway:

Have a look at “Ports and firewall” under the section “Installation”.

-> https://docs.microsoft.com/en-us/azure/data-factory/data-factory-data-management-gateway#installation

Azure MFA Server:

-> https://docs.microsoft.com/en-us/azure/multi-factor-authentication/multi-factor-authentication-get-started-server#install-and-configure-the-azure-multi-factor-authentication-server

Azure Automation Hybrid Runbook Worker:

-> https://docs.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker#hybrid-runbook-worker-requirements

Azure KMS server:

-> https://blogs.technet.microsoft.com/supportingwindows/2015/05/20/use-azure-custom-routes-to-enable-kms-activation-with-forced-tunneling/


Azure Germany services URLs and IP addresses for firewall or proxy whitelisting


When you are working with Azure Germany, you sometimes have to whitelist specific IP address ranges or URLs in your corporate firewall or proxy to access all the Azure services you are using or trying to use.

Some information, like the datacenter IP ranges and some of the URLs, is easy to find. Other things, like the calling IP addresses of specific Azure services or specific URLs, are harder to track down.

The list of Azure Germany service specific URLs and IP addresses in this blog post is not complete and is only a snapshot at the time of writing.

The post is divided into the following sections: IP addresses, calling IP addresses and URLs.

I hope you find the summary useful and helpful for your day-to-day work with Azure Germany.

IP addresses:

Datacenter IP ranges:

-> https://www.microsoft.com/en-us/download/details.aspx?id=54770

Calling IP addresses:

Traffic Manager:

In Azure Germany the Traffic Manager health checks originate from the following IP addresses.

  • Germany Central: 51.4.144.143
  • Germany Northeast: 51.5.144.82

URLs:

PowerShell Get-AzureRmEnvironment / Azure main services URLs:

AzureGermanyEnvironment_thumb
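As for the public cloud, you can retrieve these values with Azure PowerShell; a minimal sketch:

#List the endpoint URLs and suffixes of the Azure Germany cloud environment
Get-AzureRmEnvironment -Name AzureGermanyCloud | Format-List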

Active Directory Service Endpoint Resource Id: https://management.core.cloudapi.de/
Gallery Url: https://gallery.cloudapi.de/
Management Portal Url: https://portal.microsoftazure.de
Service Management Url: https://management.core.cloudapi.de/
Publish Settings File Url: https://manage.microsoftazure.de/publishsettings/index
Resource Manager Url: https://management.microsoftazure.de
Sql Database Dns Suffix: .database.cloudapi.de
Storage Endpoint Suffix: core.cloudapi.de (.blob.core.cloudapi.de, .queue.core.cloudapi.de, .table.core.cloudapi.de, .file.core.cloudapi.de)
Active Directory Authority: https://login.microsoftonline.de/
Graph Url: https://graph.cloudapi.de/
Graph Endpoint Resource Id: https://graph.cloudapi.de/
Traffic Manager Dns Suffix: azuretrafficmanager.de
Azure Key Vault Dns Suffix: vault.microsoftazure.de
Azure Data Lake Store File System Endpoint Suffix: (not available)
Azure Data Lake Analytics Catalog And Job Endpoint Suffix: (not available)
Azure Key Vault Service Endpoint Resource Id: https://vault.microsoftazure.de

Other Azure services URLs:

Redis Cache: .redis.cache.cloudapi.de
App Service: .azurewebsites.de
DocumentDB: documents.microsoftazure.de
Azure Batch: batch.cloudapi.de, .{region}.batch.cloudapi.de
Machine Learning Studio: (not available)
Machine Learning Gallery: (not available)
Machine Learning Web Service Management: (not available)
Service Bus: .servicebus.cloudapi.de
Event Hubs: .servicebus.cloudapi.de
Azure IoT Hub: .azure-devices.de
API Management: (not available)
Azure Automation: (not available)
Azure Automation Webhooks: (not available)
Public IP, Azure LB, Web Application Gateway, Service Fabric, Azure Container Service: azurecloudapp.de, .{region}.azurecloudapp.de
Azure Search: (not available)
Azure Analysis Services: (not available)
Logic App: (not available)
CDN: (not available)
HDInsight: .azurehdinsight.de

Azure KMS server:

For Azure Germany you can use the following blog article as well. The only difference is that the KMS server in Azure Germany has the following FQDN and IP address.

  • FQDN: kms.core.cloudapi.de
  • IP address: 51.4.143.248
  • Port: 1688

-> https://blogs.technet.microsoft.com/supportingwindows/2015/05/20/use-azure-custom-routes-to-enable-kms-activation-with-forced-tunneling/



Demystifying Azure VMs bandwidth specification – F-series


As you may know, Microsoft specifies the bandwidth of Azure VMs as low, moderate, high, very high and extremely high. As Yousef Khalidi, CVP Azure Networking, wrote in his blog post in March, Microsoft will provide specific numbers for each Azure VM size in April.

When our world-wide deployment completes in April, we’ll update our VM Sizes table so you can see the expected networking throughput performance numbers for our virtual machines.

https://azure.microsoft.com/en-us/blog/networking-innovations-that-drive-the-cloud-disruption/

I have run network performance tests on each F-series VM size to get the numbers. For my tests I used the NTttcp utility by Microsoft.

-> https://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769

My test setup was the following:

  • send-crp1 as the sender VM with the internal IP address 10.0.0.4
  • receive-crp1 as the receiver VM with the internal IP address 10.0.0.5

Both VMs were initially deployed as F1 to start the tests and were running Windows Server 2016.

NTttcp option on sender VM:

ntttcp.exe -s -m 8,*,10.0.0.5 -l 128k -a 2 -t 15

NTttcp option on receiver VM:

ntttcp.exe -r -m 8,*,10.0.0.5 -rb 2M -a 16 -t 15

Before running the tests it is advisable to disable the Windows firewall on the systems.
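For example, the firewall can be turned off for all profiles with a single cmdlet on both test VMs (remember to re-enable it after the tests):

#Disable the Windows firewall for all profiles on the test VMs
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False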

NetworkPerformance1

Here are the results for all F-series VM sizes, so you get an idea of what you can expect when you read that the network bandwidth is high. But keep in mind that the network bandwidth varies depending on the number of CPU cores.

Size   CPU cores   Memory (GiB)   Network bandwidth   Measured network bandwidth
F1     1           2              Moderate            750 Mbit/s
F2     2           4              High                1.5 Gbit/s
F4     4           8              High                3 Gbit/s
F8     8           16             High                6 Gbit/s
F16    16          32             Extremely high      12 Gbit/s

The most powerful VM size regarding network bandwidth without the use of RDMA is the D15_v2 with the accelerated networking option.

NetworkPerformance2

Size      CPU cores   Memory (GiB)   Network bandwidth   Measured network bandwidth
D15_v2    20          140            Extremely high      24 Gbit/s

I am looking forward to the specific numbers being published in the Azure documentation, as mentioned and quoted at the beginning of this blog post.

-> https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes


Troubleshoot Azure VPN gateways with the Azure Network Watcher


Earlier this year Microsoft launched a new Azure service for network diagnostics and troubleshooting called Network Watcher.

-> https://azure.microsoft.com/en-us/services/network-watcher/

Network Watcher offers a range of tools, VPN diagnostics and packet capturing to mention two of them. In this blog post I would like to talk about the VPN diagnostics capability.

Before we can use the VPN diagnostics, we have to enable the Network Watcher for the specific region.

NetworkWatcherVPN01

There is only one Network Watcher instance per Azure region in a subscription.

For the next step we jump into the VPN diagnostics section and select our desired VPN gateway with the corresponding connection. We also have to select a storage account to store the generated log files.

NetworkWatcherVPN02

Before we kick off the diagnostic run, we have to make sure that the VPN gateway type is supported by the Network Watcher! Currently, only route-based VPN gateway types are supported.

-> https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-troubleshoot-overview#supported-gateway-types

Now we can start the diagnostic run with a click on Start troubleshooting.

NetworkWatcherVPN03NetworkWatcherVPN04
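The diagnostic run can also be started with Azure PowerShell; a minimal sketch where the Network Watcher, connection, storage account and container names are placeholders:

#Start a troubleshooting run against a VPN gateway connection
$networkWatcher = Get-AzureRmNetworkWatcher -Name "NetworkWatcher_westeurope" -ResourceGroupName "NetworkWatcherRG"
$connection = Get-AzureRmVirtualNetworkGatewayConnection -Name "s2s-connection" -ResourceGroupName "vpn"
$storageAccount = Get-AzureRmStorageAccount -Name "vpndiagnostics" -ResourceGroupName "vpn"
Start-AzureRmNetworkWatcherResourceTroubleshooting -NetworkWatcher $networkWatcher -TargetResourceId $connection.Id -StorageId $storageAccount.Id -StoragePath ($storageAccount.PrimaryEndpoints.Blob + "logs")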

For downloading the log files, I am using the Azure Storage Explorer.

-> http://storageexplorer.com/

NetworkWatcherVPN06

The log files are sorted by date and time of the latest run and will be placed in a .zip file.

Our first run was a healthy one. So the .zip file contains two files.

ConnectionStats.txt

Connectivity State : Connected
Remote Tunnel Endpoint :
Ingress Bytes (since last connected) : 10944 B
Egress Bytes (Since last connected) : 10944 B
Connected Since : 6/28/2017 7:01:49 AM

CPUStat.txt

Current CPU Usage : 0 %
Current Memory Available : 595 MBs

To force a run that shows the VPN gateway in an unhealthy state, I edited the PSK on one side so that the PSK no longer matches.

NetworkWatcherVPN05

Now we get additional files in the .zip file. Besides ConnectionStats.txt and CPUStat.txt, we get IKEErrors.txt, Scrubbed-wfpdiag.txt, wfpdiag.txt.sum and wfpdiag.xml. The most important ones are IKEErrors.txt and Scrubbed-wfpdiag.txt.

IKEErrors.txt

Error: Authenticated failed. Check keys and auth type offers.
based on log : Peer sent AUTHENTICATION_FAILED notify
Error: Authentication failed. Check shared key. Check crypto. Check lifetimes.
based on log : Peer failed with Windows error 13801(ERROR_IPSEC_IKE_AUTH_FAIL)
Error: On-prem device sent invalid payload.
based on log : IkeFindPayloadInPacket failed with Windows error 13843(ERROR_IPSEC_IKE_INVALID_PAYLOAD)

Scrubbed-wfpdiag.txt


[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|Failure type: IKE/Authip Main Mode Failure
[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|Type specific info:
[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|  Failure error code:0x000035e9
[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|    IKE authentication credentials are unacceptable

The IKEErrors.txt file gives us an overview of what may be wrong, and we can start checking those settings. For more detailed troubleshooting we have to take a look into the Scrubbed-wfpdiag.txt file. As quoted from the file, we get the exact information that something is wrong with the provided credentials, also known as our PSK.

As you can see, Network Watcher is an easy-to-use service providing you with the necessary tool set to diagnose and troubleshoot network issues and misconfigurations in your Azure environment.


Working with NSG augmented security rules in Azure


At Microsoft Ignite this year Microsoft announced several networking improvements and features in Azure. Most of them are currently in public preview and can already be tested, like the augmented security rules for NSGs.

-> https://azure.microsoft.com/en-us/updates/public-preview-features-for-nsgs/

What are augmented security rules? In short, they extend the rule set, so you can specify more than one IP address or IP address space or a combination of both for the “Source IP addresses/CIDR ranges” or “Destination IP addresses/CIDR ranges” options.

-> https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#augmented-security-rules

Let us have a look at one example. You would like to restrict internet access for the VMs in one specific subnet, but the VMs should be able to communicate with the Azure datacenter IP ranges.

-> https://www.microsoft.com/en-us/download/details.aspx?id=41653

In this example we have a look at the Azure region East US with 424 IP ranges as of today. Before augmented security rules you had to create 424 rules to cover them all. This can be cumbersome, because the NSG rule limit is 500. With augmented security rules you can combine the 424 rules into one.

nsgaugmented01

Here is an example of how you can read in the Azure datacenter IP ranges XML file and create an Azure NSG with the specific region's IP ranges using PowerShell.

[xml]$azureRegions=Get-Content .\PublicIPs_20171031.xml
$filter="useast"
$selectedRegion=($azureRegions.AzurePublicIpAddresses.Region|Where-Object {$_.Name -eq $filter}).IpRange.Subnet
$ruleName="Azure-region-"+$filter
$ruleDescription="Allow Azure region "+$filter
$rules = New-AzureRmNetworkSecurityRuleConfig -Name $ruleName -Description $ruleDescription `
-Access Allow -Protocol Tcp -Direction Outbound -Priority 1000 `
-SourceAddressPrefix Internet -SourcePortRange * `
-DestinationAddressPrefix $selectedRegion -DestinationPortRange * -Verbose
New-AzureRmNetworkSecurityGroup -Name $ruleName -ResourceGroupName "augmented-security-rules" -Location eastus -SecurityRules $rules -Verbose

Another example with PowerShell where you provide the IP address ranges directly.

#Create new rule with new NSG
$rules = New-AzureRmNetworkSecurityRuleConfig -Name augmented-rule -Description "Allow RDP" `
-Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
-SourceAddressPrefix Internet -SourcePortRange * `
-DestinationAddressPrefix "172.16.0.0/24","10.0.0.0/24" -DestinationPortRange 3389 -Verbose
New-AzureRmNetworkSecurityGroup -Name RDP -ResourceGroupName "augmented-security-rules" -Location eastus -SecurityRules $rules -Verbose

The important part is the specification of the address ranges. If you are providing them directly, make sure that the format is "range1","range2". On the other hand, if you are using variables, make sure the variable is an array.
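A quick way to double-check that a variable really is an array before passing it to the cmdlet (the ranges are just sample values):

#A variable holding multiple ranges must be an array
$destinationRanges = @("172.16.0.0/24","10.0.0.0/24")
$destinationRanges -is [array]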

Finally use the latest Azure PowerShell version and have fun trying out NSG augmented security rules.


Deploy NSG augmented security rules with Azure Resource Manager templates


In my previous blog post “Working with NSG augmented security rules in Azure” I described what the NSG augmented security rules are and how you can leverage them with PowerShell.

-> http://www.danielstechblog.info/working-nsg-augmented-security-rules-azure/

In this blog post I will briefly describe how to implement the augmented security rules in your Azure Resource Manager template. First, let us have a look at the standard security rule definition we are familiar with.

"resources": [
{
"apiVersion": "2017-10-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "value",
"location": "[resourceGroup().location]",
"properties": {
"securityRules":[
{
"name": "value",
"properties":{
"description": "value",
"protocol": "Tcp",
"sourcePortRange": "value",
"destinationPortRange": "value",
"sourceAddressPrefix": "value",
"destinationAddressPrefix": "value",
"access": "Allow",
"priority": 100,
"direction": "Outbound"
}
}
]
}
}
]

The thing with the augmented security rules is that instead of the properties sourcePortRange, destinationPortRange, sourceAddressPrefix and destinationAddressPrefix we must provide the plural versions sourcePortRanges, destinationPortRanges, sourceAddressPrefixes and destinationAddressPrefixes. Furthermore, we must provide the values for these properties as arrays, otherwise the deployment will fail.

"resources": [
{
"apiVersion": "2017-10-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "value",
"location": "[resourceGroup().location]",
"properties": {
"securityRules":[
{
"name": "value",
"properties":{
"description": "value",
"protocol": "Tcp",
"sourcePortRanges": [ "value","value" ],
"destinationPortRanges": [ "value","value" ],
"sourceAddressPrefixes": [ "value","value" ],
"destinationAddressPrefixes": [ "value","value" ],
"access": "Allow",
"priority": 100,
"direction": "Outbound"
}
}
]
}
}
]

Finally, let us have a look at the same scenario I described in my previous blog article: creating an NSG augmented security rule that covers the IP ranges of the Azure region East US and opens the ports 22, 3389 and 443.

nsgaugmented02

Template file:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nsgPrefixName": {
      "type": "string",
      "metadata": {
        "description": "NSG prefix name"
      }
    },
    "azureRegionName": {
      "type": "string",
      "metadata": {
        "description": "Azure Region name"
      }
    },
    "destinationPrefix": {
      "type": "array",
      "metadata": {
        "description": "Destination prefix"
      }
    },
    "allowedPorts": {
      "type": "array",
      "metadata": {
        "description": "Define allowed inbound ports"
      }
    }
  },
  "variables": {
    "basePorts": [
      22,
      3389
    ],
    "allPorts": "[concat(variables('basePorts'), parameters('allowedPorts'))]"
  },
  "resources": [
    {
      "apiVersion": "[providers('Microsoft.Network','networkSecurityGroups').apiVersions[0]]",
      "type": "Microsoft.Network/networkSecurityGroups",
      "name": "[concat(parameters('nsgPrefixName'),'-', parameters('azureRegionName'),'-nsg')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "securityRules": [
          {
            "name": "enabledPorts",
            "properties": {
              "description": "enabledPorts",
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "destinationPortRanges": "[variables('allPorts')]",
              "sourceAddressPrefix": "Internet",
              "destinationAddressPrefixes": "[parameters('destinationPrefix')]",
              "access": "Allow",
              "priority": 100,
              "direction": "Outbound"
            }
          }
        ]
      }
    }
  ],
  "outputs": {
    "apiVersionNSG": {
      "type": "string",
      "value": "[providers('Microsoft.Network','networkSecurityGroups').apiVersions[0]]"
    },
    "enabledPorts": {
      "type": "array",
      "value": "[variables('allPorts')]"
    }
  }
}

Template parameters file:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nsgPrefixName": {
      "value": "Azure-region"
    },
    "azureRegionName": {
      "value": "eastus"
    },
    "destinationPrefix": {
      "value": []
    },
    "allowedPorts": {
      "value": [ 443 ]
    }
  }
}

PowerShell deployment script:

[xml]$azureRegions=Get-Content .\PublicIPs_20171031.xml
$filter="useast"
$selectedRegion=($azureRegions.AzurePublicIpAddresses.Region|Where-Object {$_.Name -eq $filter}).IpRange.Subnet
New-AzureRmResourceGroupDeployment -Name "augmented-security-rules" -ResourceGroupName "augmented-security-rules" -TemplateParameterFile .\NSG_ASR.parameters.json -TemplateFile .\NSG_ASR.json -destinationPrefix $selectedRegion -Verbose


Deploying Application Security Groups with an Azure Resource Manager template


This month Microsoft launched the public preview of Application Security Groups, ASG for short, in all Azure regions.

-> https://azure.microsoft.com/en-us/updates/public-preview-for-asg/

ASGs are like a security group and make it easier to define an Azure Network Security Group rule set. You can join Azure VMs, or to be more specific the Azure VM’s NIC, to an ASG. In the next step you use the Application Security Group in the source or destination section of an NSG rule to configure the access. ASGs simplify the definition and management of the NSG rule set and make it more secure, because you do not have to specify a subnet in CIDR notation to open communication for several VMs in a subnet or use an NSG rule for each VM.

With ASGs and NSGs you can build up your security before deploying the VMs, because you are independent of the VM’s IP address. Just add the VM to the specific ASG and the NSG rule set will be applied to the VM’s NIC.

During the public preview, creation and configuration of Application Security Groups is only possible via Azure PowerShell, Azure CLI and ARM templates.
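Just to give an impression of the PowerShell route, here is a minimal sketch; the names are placeholders and the cmdlet and parameter names are my assumption based on the preview AzureRM.Network module, so verify them against the module version you have installed.

#Create an Application Security Group
$asg = New-AzureRmApplicationSecurityGroup -Name "web-servers" -ResourceGroupName "asg-demo" -Location westeurope
#Reference the ASG as destination in an NSG rule instead of an IP range
$rule = New-AzureRmNetworkSecurityRuleConfig -Name "HTTP-80" -Description "HTTP 80" `
-Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
-SourceAddressPrefix Internet -SourcePortRange * `
-DestinationApplicationSecurityGroup $asg -DestinationPortRange 80
New-AzureRmNetworkSecurityGroup -Name "web-nsg" -ResourceGroupName "asg-demo" -Location westeurope -SecurityRules $rule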

After setting the context let us talk about the ARM template deployment of ASGs.

Have a look at the following snippet.

{
    "apiVersion": "2017-10-01",
    "type": "Microsoft.Network/applicationSecurityGroups",
    "name": "[parameters('asgName')]",
    "location": "[resourceGroup().location]",
    "properties": {}
}

As you can see, the only configuration parameter in an ARM template is the name of the Application Security Group. That is it.

For NSG rule configuration and NIC assignment we need to link those resources with the ASG via the Application Security Group resource id. Have a look at the following snippets.

{
    "apiVersion": "2017-10-01",
    "type": "Microsoft.Network/networkSecurityGroups",
    "name": "[parameters('nsgName')]",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[concat('Microsoft.Network/applicationSecurityGroups/', parameters('asgName'))]"
    ],
    "properties": {
        "securityRules": [
            {
                "name": "HTTP-80",
                "properties": {
                    "description": "HTTP 80",
                    "protocol": "Tcp",
                    "sourcePortRange": "*",
                    "destinationPortRange": "80",
                    "sourceAddressPrefix": "Internet",
                    "destinationApplicationSecurityGroups": [
                        {
                            "id": "[resourceId('Microsoft.Network/applicationSecurityGroups',parameters('asgName'))]"
                        }
                    ],
                    "access": "Allow",
                    "priority": 100,
                    "direction": "Inbound"
                }
            }
        ]
    }
}
{
    "apiVersion": "2017-10-01",
    "type": "Microsoft.Network/networkInterfaces",
    "name": "[parameters('vmNicName')]",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/',parameters('virtualNetworkName'))]",
        "[concat('Microsoft.Network/applicationSecurityGroups/', parameters('asgName'))]"
    ],
    "properties": {
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "privateIPAllocationMethod": "Dynamic",
                    "subnet": {
                        "id": "[concat(resourceId('Microsoft.Network/virtualNetworks',parameters('virtualNetworkName')),'/subnets/',parameters('subnetName'))]"
                    },
                    "applicationSecurityGroups": [
                        {
                            "id": "[resourceId('Microsoft.Network/applicationSecurityGroups',parameters('asgName'))]"
                        }
                    ]
                }
            }
        ]
    }
}

That is all. If you need more information, have a look at the Azure documentation.

-> https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#application-security-groups


Using ACS Engine to build private Kubernetes clusters with bring your own Virtual Network on Azure


Looking at Azure Container Service (AKS) – Managed Kubernetes, you may have noticed that AKS currently does not support bring your own VNET and private Kubernetes masters. If you need both capabilities, or one of them, today, you must use ACS Engine to create the necessary Azure Resource Manager templates for the Kubernetes cluster deployment.

-> https://github.com/Azure/acs-engine

Beyond that, ACS Engine has advantages and disadvantages. Some of the ACS Engine advantages are RBAC, Managed Service Identity, private Kubernetes master, bring your own VNET and jump box deployment support. You can even choose your favorite CNI plugin for network policies, like the Azure CNI plugin, Calico or Cilium. If you want, you can specify none, but the default option is the Azure CNI plugin.

The main disadvantage of ACS Engine is that it creates a non-managed Kubernetes cluster. You are responsible for nearly everything to keep the cluster operational.

So, you have greater flexibility with ACS Engine, but you are responsible for more compared to a managed solution.

After setting the context let us now start with the two scenarios I would like to talk about.

  1. Private Kubernetes cluster with bring your own VNET and jump box deployment
  2. Private Kubernetes cluster with bring your own VNET, custom Kubernetes service CIDR, custom Kubernetes DNS server IP address and jump box deployment

For each scenario, a VNET with address space 172.16.0.0/16 and one subnet with address space 172.16.0.0/20 will be the foundation to deploy the Kubernetes cluster in.

Starting with the first scenario and its config.json, have a look at the following lines.

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.10",
      "kubernetesConfig": {
        "useManagedIdentity": true,
        "networkPolicy": "azure",
        "containerRuntime": "docker",
        "enableRbac": true,
        "maxPods":30,
        "useInstanceMetadata": true,
        "addons": [
          {
            "name": "tiller",
            "enabled": true
          },
          {
            "name": "kubernetes-dashboard",
            "enabled": true
          }
        ],
        "privateCluster": {
          "enabled": true,
          "jumpboxProfile": {
            "name": "azst-acse1-jb",
            "vmSize": "Standard_A2_v2",
            "osDiskSizeGB": 32,
            "storageProfile": "ManagedDisks",
            "username": "azureuser",
            "publicKey": "REDACTED"
          }
        }
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "azst-acse1",
      "vmSize": "Standard_A2_v2",
      "osDiskSizeGB": 32,
      "distro": "ubuntu",
      "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s",
      "firstConsecutiveStaticIP": "172.16.15.239",
      "vnetCIDR": "172.16.0.0/16"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool",
        "count": 3,
        "vmSize": "Standard_A2_v2",
        "osDiskSizeGB": 32,
        "distro": "ubuntu",
        "storageProfile": "ManagedDisks",
        "diskSizesGB": [
          32
        ],
        "availabilityProfile": "AvailabilitySet",
        "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "REDACTED"
          }
        ]
      }
    }
  }
}

-> https://github.com/neumanndaniel/kubernetes/blob/master/acs-engine/kubernetes_custom_vnet_private_master.json

If your network does not overlap with the Kubernetes service CIDR, you only need to specify the VNET subnet id for the master and agent nodes.

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
    ...
    },
    "masterProfile": {
      ...
      "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s",
      "firstConsecutiveStaticIP": "172.16.15.239",
      "vnetCIDR": "172.16.0.0/16"
    },
    "agentPoolProfiles": [
      {
        ...
        "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s"
      }
    ],
    "linuxProfile": {
    ...
    }
  }
}

Additionally, you should set the first consecutive IP and VNET CIDR in the master node section. With the first consecutive IP you set the IP of the first master node. For details check the ACS Engine documentation. The VNET CIDR should be set to prevent source address NAT’ing in the VNET.

-> https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/features.md#feat-custom-vnet

Our second case covers the necessary configuration steps if your network overlaps with the Kubernetes service CIDR and you would like to change the Kubernetes service CIDR, the Kubernetes DNS server IP address and the pod CIDR. Have a look at the following lines.

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.10",
      "kubernetesConfig": {
        "useManagedIdentity": true,
        "kubeletConfig": {
          "--non-masquerade-cidr": "172.16.0.0/16"
        },
        "clusterSubnet": "172.16.0.0/20",
        "dnsServiceIP": "172.16.16.10",
        "serviceCidr": "172.16.16.0/20",
        "networkPolicy": "azure",
        "containerRuntime": "docker",
        "enableRbac": true,
        "maxPods":30,
        "useInstanceMetadata": true,
        "addons": [
          {
            "name": "tiller",
            "enabled": true
          },
          {
            "name": "kubernetes-dashboard",
            "enabled": true
          }
        ],
        "privateCluster": {
          "enabled": true,
          "jumpboxProfile": {
            "name": "azst-acse1-jb",
            "vmSize": "Standard_A2_v2",
            "osDiskSizeGB": 32,
            "storageProfile": "ManagedDisks",
            "username": "azureuser",
            "publicKey": "REDACTED"
          }
        }
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "azst-acse1",
      "vmSize": "Standard_A2_v2",
      "osDiskSizeGB": 32,
      "distro": "ubuntu",
      "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s",
      "firstConsecutiveStaticIP": "172.16.15.239",
      "vnetCIDR": "172.16.0.0/16"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool",
        "count": 3,
        "vmSize": "Standard_A2_v2",
        "osDiskSizeGB": 32,
        "distro": "ubuntu",
        "storageProfile": "ManagedDisks",
        "availabilityProfile": "AvailabilitySet",
        "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "REDACTED"
          }
        ]
      }
    }
  }
}

-> https://github.com/neumanndaniel/kubernetes/blob/master/acs-engine/kubernetes_custom_network_config_private_master.json

In this case the service CIDR is part of the VNET CIDR to ensure no overlap with existing networks. The DNS server IP address must be in the service CIDR space, and the pod CIDR equals the VNET subnet address space, because we are using the Azure CNI plugin. So, the pods receive an IP address directly from the VNET subnet. If you are using another CNI plugin or none, then make sure to use an address space that is part of the VNET CIDR in this case. The final parameter is --non-masquerade-cidr, which must be set to the VNET CIDR. Here is a brief overview of all necessary settings.

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      ...
      "kubernetesConfig": {
        ...
        "kubeletConfig": {
          "--non-masquerade-cidr": "172.16.0.0/16" //VNET CIDR address space
        },
        "clusterSubnet": "172.16.0.0/20", //VNET subnet CIDR address space
        "dnsServiceIP": "172.16.16.10", //IP address in serviceCidr address space
        "serviceCidr": "172.16.16.0/20", //CIDR address space in VNET CIDR - no overlapping in VNET
        ...
      }
    },
    "masterProfile": {
      ...
      "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s",
      "firstConsecutiveStaticIP": "172.16.15.239",
      "vnetCIDR": "172.16.0.0/16" //VNET CIDR address space
    },
    "agentPoolProfiles": [
      {
        ...
        "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s"
      }
    ],
    "linuxProfile": {
    ...
    }
  }
}

The next step on our way to the private Kubernetes cluster with bring your own VNET is the generation of the Azure Resource Manager templates using ACS Engine, assuming you have downloaded the necessary ACS Engine bits and placed the kubernetes.json config in the same folder.

./acs-engine.exe generate ./kubernetes.json

The command generates the ARM templates and places them in the _output folder by default.

Now, start the Azure Cloud Shell (https://shell.azure.com) and jump into the _output folder. Upload the azuredeploy.json and azuredeploy.parameters.json to the Cloud Shell via drag and drop. Afterwards kick off the deployment using the Azure CLI.

az group create --name acs-engine --location westeurope
az network vnet create --name acs-engine --resource-group acs-engine --address-prefixes 172.16.0.0/16 --subnet-name K8s --subnet-prefix 172.16.0.0/20
az group deployment create --resource-group acs-engine --template-file ./azuredeploy.json --parameters ./azuredeploy.parameters.json --verbose

After the successful deployment we can connect to the jump box and check whether all settings took effect.

acsengineprivatek8s01acsengineprivatek8s02acsengineprivatek8s03

azcdmdn@azst-acse1-jb:~$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)         AGE       SELECTOR
default       azure-vote-back        ClusterIP      172.16.29.115   <none>          6379/TCP        13d       app=azure-vote-back
default       azure-vote-front       LoadBalancer   172.16.30.243   51.144.43.142   80:32627/TCP    13d       app=azure-vote-front
default       kubernetes             ClusterIP      172.16.16.1     <none>          443/TCP         13d       <none>
kube-system   heapster               ClusterIP      172.16.18.168   <none>          80/TCP          13d       k8s-app=heapster
kube-system   kube-dns               ClusterIP      172.16.16.10    <none>          53/UDP,53/TCP   13d       k8s-app=kube-dns
kube-system   kubernetes-dashboard   NodePort       172.16.28.177   <none>          80:32634/TCP    13d       k8s-app=kubernetes-dashboard
kube-system   metrics-server         ClusterIP      172.16.26.104   <none>          443/TCP         13d       k8s-app=metrics-server
kube-system   tiller-deploy          ClusterIP      172.16.29.150   <none>          44134/TCP       13d       app=helm,name=tiller
azcdmdn@azst-acse1-jb:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                            READY     STATUS    RESTARTS   AGE       IP              NODE
default       azure-vote-back-68d6c68dcc-xcfgk                1/1       Running   0          2h        172.16.0.46     k8s-agentpool-35404701-2
default       azure-vote-front-7976b7dcd9-mt2hh               1/1       Running   0          2h        172.16.0.51     k8s-agentpool-35404701-2
default       kured-2hrwh                                     1/1       Running   5          7d        172.16.0.22     k8s-agentpool-35404701-0
default       kured-5sgdn                                     1/1       Running   4          7d        172.16.0.45     k8s-agentpool-35404701-2
default       kured-qgbsx                                     1/1       Running   4          7d        172.16.0.99     k8s-agentpool-35404701-1
default       omsagent-2d2kf                                  1/1       Running   10         8d        172.16.0.41     k8s-agentpool-35404701-2
default       omsagent-ql9xv                                  1/1       Running   18         13d       172.16.0.84     k8s-master-35404701-0
default       omsagent-sc2xm                                  1/1       Running   10         8d        172.16.0.110    k8s-agentpool-35404701-1
default       omsagent-szj6j                                  1/1       Running   9          8d        172.16.0.16     k8s-agentpool-35404701-0
default       vsts-agent-qg5rz                                1/1       Running   1          2h        172.16.0.52     k8s-agentpool-35404701-2
kube-system   heapster-568476f785-c46r9                       2/2       Running   0          2h        172.16.0.39     k8s-agentpool-35404701-2
kube-system   kube-addon-manager-k8s-master-35404701-0        1/1       Running   6          13d       172.16.15.239   k8s-master-35404701-0
kube-system   kube-apiserver-k8s-master-35404701-0            1/1       Running   9          13d       172.16.15.239   k8s-master-35404701-0
kube-system   kube-controller-manager-k8s-master-35404701-0   1/1       Running   6          13d       172.16.15.239   k8s-master-35404701-0
kube-system   kube-dns-v20-59b4f7dc55-wtv6h                   3/3       Running   0          2h        172.16.0.44     k8s-agentpool-35404701-2
kube-system   kube-dns-v20-59b4f7dc55-xxgdd                   3/3       Running   0          2h        172.16.0.48     k8s-agentpool-35404701-2
kube-system   kube-proxy-hf467                                1/1       Running   6          13d       172.16.15.239   k8s-master-35404701-0
kube-system   kube-proxy-n8sj6                                1/1       Running   7          8d        172.16.0.5      k8s-agentpool-35404701-0
kube-system   kube-proxy-nb4gx                                1/1       Running   6          8d        172.16.0.36     k8s-agentpool-35404701-2
kube-system   kube-proxy-r8wdz                                1/1       Running   6          8d        172.16.0.97     k8s-agentpool-35404701-1
kube-system   kube-scheduler-k8s-master-35404701-0            1/1       Running   6          13d       172.16.15.239   k8s-master-35404701-0
kube-system   kubernetes-dashboard-64dcf5784f-gxtqv           1/1       Running   0          2h        172.16.0.109    k8s-agentpool-35404701-1
kube-system   metrics-server-7fcdc5dbb9-vs26l                 1/1       Running   0          2h        172.16.0.55     k8s-agentpool-35404701-2
kube-system   tiller-deploy-d85ccb55c-6nncg                   1/1       Running   0          2h        172.16.0.57     k8s-agentpool-35404701-2
azcdmdn@azst-acse1-jb:~$ kubectl get nodes --all-namespaces -o wide
NAME                       STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION      CONTAINER-RUNTIME
k8s-agentpool-35404701-0   Ready     agent     13d       v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
k8s-agentpool-35404701-1   Ready     agent     13d       v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
k8s-agentpool-35404701-2   Ready     agent     13d       v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
k8s-master-35404701-0      Ready     master    13d       v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1

The private Kubernetes cluster with bring your own VNET and custom network configuration is now fully operational and ready for some container deployments.

Last but not least, I highly recommend that you secure your jump box with the Azure Security Center just-in-time VM access capability.

-> https://docs.microsoft.com/en-us/azure/security-center/security-center-just-in-time

Der Beitrag Using ACS Engine to build private Kubernetes clusters with bring your own Virtual Network on Azure erschien zuerst auf Daniel's Tech Blog.

Using custom DNS server for domain specific name resolution with Azure Kubernetes Service


Just a short blog post about a small challenge I faced recently. If you want to specify a custom DNS server for domain specific name resolution with AKS, you can do so.

The necessary steps are already described in the Kubernetes documentation.

-> https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/

Define a config map and apply it to your AKS cluster in Azure. The following is an example of how to do it.

First deploy a pod in your AKS cluster for name resolution testing. I have used the following example of a busybox container.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

My scenario was that I wanted to have name resolution for the test domain azure.local. AKS cannot resolve it by default, because it is not a standard TLD known by the DNS system.

customDNS01
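The behavior shown in the screenshot can be reproduced with a quick name resolution test from the busybox pod deployed above. The commands below are a minimal sketch and assume the pod is named busybox as in the manifest:

kubectl exec -it busybox -- nslookup azure.local
kubectl exec -it busybox -- nslookup kubernetes.default

The first lookup fails as long as the stub domain is not configured, while the second one confirms that the in-cluster DNS itself is working.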

Moving on with the config map definition as described in the Kubernetes documentation, I am providing the AKS cluster with the necessary information on how to contact the custom DNS server for this specific domain. The DNS server sits in another VNET in Azure which is connected via VNET peering with the AKS VNET.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"azurestack.local": ["172.16.0.4"]}

After a few seconds I am able to resolve the domain azure.local and its records.

customDNS02

When to use it? Especially in hybrid cloud use cases, where you need to be able to resolve names in AKS to reach on-premises resources.

Der Beitrag Using custom DNS server for domain specific name resolution with Azure Kubernetes Service erschien zuerst auf Daniel's Tech Blog.


Configuring Azure Kubernetes Service via the Terraform OSS Azure Resource Provider to use a custom DNS server for domain specific name resolution


I have already written about how to use a custom DNS server for domain specific name resolution with AKS a couple of weeks ago.

-> https://www.danielstechblog.io/using-custom-dns-server-for-domain-specific-name-resolution-with-azure-kubernetes-service/

Today I am writing about how you can leverage the newly announced Terraform OSS Azure Resource Provider for the same configuration with your existing Azure Resource Manager template know-how. The Terraform OSS RP is currently in private preview and if you would like to try it out, you can sign up under the following link.

-> https://aka.ms/tfossrp

During the private preview only the three Terraform providers for Kubernetes, Cloudflare and Datadog are supported. We will focus on the Kubernetes one.

Having a look at the following ARM template, you will see two different resource types: Microsoft.TerraformOSS/providerregistrations and Microsoft.TerraformOSS/resources.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "clusterName": {
            "type": "string",
            "metadata": {
                "description": "The name of the AKS cluster."
            }
        },
        "aksResourceGroup": {
            "type": "string",
            "metadata": {
                "description": "AKS cluster resource group name"
            }
        },
        "terraformResourceName": {
            "type": "string",
            "metadata": {
                "description": "The name of the Terraform deployment."
            }
        },
        "terraformResourceType": {
            "type": "string",
            "defaultValue": "kubernetes_config_map",
            "allowedValues": [
                "kubernetes_config_map",
                "kubernetes_horizontal_pod_autoscaler",
                "kubernetes_limit_range",
                "kubernetes_namespace",
                "kubernetes_persistent_volume",
                "kubernetes_persistent_volume_claim",
                "kubernetes_pod",
                "kubernetes_replication_controller",
                "kubernetes_resource_quota",
                "kubernetes_secret",
                "kubernetes_service",
                "kubernetes_service_account",
                "kubernetes_storage_class"
            ],
            "metadata": {
                "description": "The name of the Terraform resource type."
            }
        },
        "terraformResourceProviderLocation": {
            "type": "string",
            "defaultValue": "westcentralus",
            "allowedValues": [
                "westcentralus"
            ],
            "metadata": {
                "description": "Terraform resource provider location."
            }
        },
        "dnsZone": {
            "type": "string",
            "metadata": {
                "description": "The name of the DNS zone."
            }
        },
        "dnsServerIp": {
            "type": "string",
            "metadata": {
                "description": "The DNS server ip address."
            }
        }
    },
    "variables": {
        "apiVersion": {
            "aks": "2018-03-31",
            "terraform": "2018-05-01-preview"
        },
        "deploymentConfiguration": {
            "clusterName": "[parameters('clusterName')]",
            "aksResourceGroup": "[parameters('aksResourceGroup')]",
            "terraformLocation": "[parameters('terraformResourceProviderLocation')]",
            "terraformResourceName": "[parameters('terraformResourceName')]",
            "terraformResourceType": "[parameters('terraformResourceType')]",
            "dnsZone": "[parameters('dnsZone')]",
            "dnsServerIp": "[parameters('dnsServerIp')]"
        }
    },
    "resources": [
        {
            "apiVersion": "[variables('apiVersion').terraform]",
            "type": "Microsoft.TerraformOSS/providerregistrations",
            "name": "[variables('deploymentConfiguration').clusterName]",
            "location": "[variables('deploymentConfiguration').terraformLocation]",
            "properties": {
                "providertype": "kubernetes",
                "settings": {
                    "inline_config": "[Base64ToString(ListCredential(resourceId(subscription().subscriptionId,variables('deploymentConfiguration').aksResourceGroup,'Microsoft.ContainerService/managedClusters/accessProfiles',variables('deploymentConfiguration').clusterName,'clusterAdmin'),variables('apiVersion').aks).properties.kubeConfig)]"
                }
            }
        },
        {
            "apiVersion": "[variables('apiVersion').terraform]",
            "type": "Microsoft.TerraformOSS/resources",
            "name": "[variables('deploymentConfiguration').terraformResourceName]",
            "location": "[variables('deploymentConfiguration').terraformLocation]",
            "dependsOn": [
                "[concat('Microsoft.TerraformOSS/providerregistrations/',variables('deploymentConfiguration').clusterName)]"
            ],
            "properties": {
                "providerId": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.TerraformOSS/providerregistrations/',variables('deploymentConfiguration').clusterName)]",
                "resourcetype": "[variables('deploymentConfiguration').terraformResourceType]",
                "settings": {
                    "metadata": [
                        {
                            "name": "kube-dns",
                            "namespace": "kube-system"
                        }
                    ],
                    "data": [
                        {
                            "stubDomains": "[concat('{\"',variables('deploymentConfiguration').dnsZone,'\": [\"',variables('deploymentConfiguration').dnsServerIp,'\"]}\n')]"
                        }
                    ]
                }
            }
        }
    ],
    "outputs": {}
}

The providerregistrations type is used to get the connection and authentication information that will be used by the resources type to deploy the described configuration to your AKS cluster. More details about that can be found in the announcement blog post by Simon Davies.

-> https://azure.microsoft.com/en-us/blog/introducing-the-azure-terraform-resource-provider/

So, when would you use such an ARM template? First, you can use it during an AKS cluster deployment to deploy Kubernetes resources to the cluster. Second, if you have an existing cluster, you can use ARM templates instead of YAML configuration files to deploy Kubernetes resources to it.

The template I provided above can be used for both scenarios. For the first one you need a nested template configuration that deploys the AKS cluster first and then applies the Kubernetes configuration, as sketched below. The second scenario is the one described in the rest of this post.
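For the first scenario, the nesting could look roughly like the following snippet. It is only a sketch; the deployment names and template URIs are placeholders I chose for illustration and are not part of my GitHub templates.

"resources": [
    {
        "apiVersion": "2017-05-10",
        "type": "Microsoft.Resources/deployments",
        "name": "aksCluster",
        "properties": {
            "mode": "Incremental",
            "templateLink": {
                "uri": "https://example.com/templates/aks.json"
            }
        }
    },
    {
        "apiVersion": "2017-05-10",
        "type": "Microsoft.Resources/deployments",
        "name": "aksCustomDns",
        "dependsOn": [
            "[resourceId('Microsoft.Resources/deployments', 'aksCluster')]"
        ],
        "properties": {
            "mode": "Incremental",
            "templateLink": {
                "uri": "https://example.com/templates/aksCustomDns.json"
            }
        }
    }
]

The dependsOn element ensures that the Kubernetes configuration is only applied after the AKS cluster deployment has finished.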

You can find the template in my GitHub repository.

-> https://github.com/neumanndaniel/armtemplates/blob/master/terraform/aksCustomDns.json

As described in my previous blog post, I am deploying a busybox pod for the name resolution tests.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Yes, I could have used the Terraform OSS RP for it, but decided to go with a YAML configuration file instead to focus on the ARM template for the Kubernetes config map configuration.

My scenario was that I wanted to have name resolution for the test domain azure.local. AKS cannot resolve it by default, because it is not a standard TLD known by the DNS system.

terraformossrp01

Moving on with the config map definition as described in the Kubernetes documentation, I am providing the AKS cluster with the necessary information on how to contact the custom DNS server for this specific domain. The DNS server sits in another VNET in Azure which is connected via VNET peering with the AKS VNET.

The config map definition then gets deployed via the ARM template through an Azure Cloud Shell session and the following Azure CLI command.

az group deployment create --resource-group terraform --template-file ./aksCustomDns.json --parameters clusterName=azst-aks1 aksResourceGroup=aks terraformResourceName=customDnsZone terraformResourceType=kubernetes_config_map terraformResourceProviderLocation=westcentralus dnsZone=azure.local dnsServerIp=172.16.0.4

For the template deployment we provide the AKS cluster name, the resource group the AKS cluster sits in, the Terraform resource name, the Terraform resource type for a Kubernetes config map, the Terraform RP location, the name of the DNS zone and the DNS server IP address as parameters. The Terraform RP is only available in the Azure region West Central US right now.

terraformossrp02terraformossrp04

As you can see above, the config map definition was successfully deployed to the AKS cluster and we have two resources in Azure for the two different Terraform resource types.
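
To double-check the result from the Kubernetes side, you can inspect the config map directly, for example:

kubectl get configmap kube-dns -n kube-system -o yaml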

After a few seconds we are able to resolve the domain azure.local and its records.

terraformossrp03

In my opinion the Terraform OSS RP is a perfect addition to the Azure Resource Manager template capabilities we have today. If you are very comfortable with ARM templates, the Terraform OSS RP gives you the additional tooling to deploy Kubernetes resources onto AKS clusters instead of using YAML configuration files, which need to be deployed via kubectl for example. In the end you have the freedom to choose what you would like to use, and that is great.

If you missed it in the beginning and would like to try out the Terraform OSS RP, sign up for the private preview under the following link.

-> https://aka.ms/tfossrp

Der Beitrag Configuring Azure Kubernetes Service via the Terraform OSS Azure Resource Provider to use a custom DNS server for domain specific name resolution erschien zuerst auf Daniel's Tech Blog.

Build Azure Kubernetes Service cluster with bring your own Virtual Network on Azure


At Build this year Microsoft announced the Custom VNET with Azure CNI integration for Azure Kubernetes Service.

-> https://azure.microsoft.com/en-us/blog/kubernetes-on-azure/

Even though this was some months ago, I would like to walk you through the necessary planning and deployment steps for the bring your own Virtual Network option here.

Before we start a deployment, we must ensure that we meet the network prerequisites for the custom VNET scenario.

  • The VNET for the AKS cluster must allow outbound internet connectivity.
  • Do not create more than one AKS cluster in the same subnet.
  • Advanced networking for AKS does not support VNETs that use Azure Private DNS Zones.
  • AKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, or 172.31.0.0/16 for the Kubernetes service address range.
  • The service principal used for the AKS cluster must have Contributor permissions to the resource group containing the existing VNET.

The next step is to calculate the necessary IP address space that is needed for the AKS cluster deployment. This depends heavily on the number of nodes in the AKS cluster and the number of pods per node. An AKS cluster can have a maximum of 100 nodes and a maximum of 110 pods per node. By default, the maximum pod limit in the advanced networking scenario is 30 per node. You can adjust the limit during the initial deployment of the AKS cluster. For the calculation details, have a look at the following part of the Azure AKS documentation.

-> https://docs.microsoft.com/en-us/azure/aks/networking-overview#plan-ip-addressing-for-your-cluster
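
Roughly summarized from the linked documentation, each node, plus one additional node reserved for upgrade operations, consumes one IP address for itself plus one per configured pod. For a small cluster with three nodes and the default limit of 30 pods per node that adds up to:

(3 nodes + 1 upgrade node) + (3 nodes + 1 upgrade node) * 30 pods = 4 + 120 = 124 IP addresses

So even a small cluster should get a subnet that leaves room for scaling out later.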

I created a custom Virtual Network with a 10.0.0.0/8 address space including a subnet with a 10.240.0.0/16 address space for the AKS cluster deployment. The Kubernetes service CIDR address space is defined as 10.0.0.0/16 with the Kubernetes DNS service IP address 10.0.0.10. In a bring your own Virtual Network deployment you must ensure that the Kubernetes service CIDR is not used by any other network in Azure or on-premises that the Virtual Network will be connected to. Otherwise, you may experience network/routing issues.

You have three options to deploy an AKS cluster into a custom VNET. The Azure portal, Azure CLI or an Azure Resource Manager template.

The Azure portal experience is very straightforward. In the Networking section select Advanced for the network configuration and choose the VNET as well as the corresponding subnet. Finally, the Kubernetes service address range, the Kubernetes DNS service IP address and the Docker bridge address need to be defined. That's all, but the portal does not support setting the pod limit.

akscustomvnet01

The Azure CLI experience is similar, except for the VNET and subnet selection. In the CLI command you reference the Virtual Network subnet via its resource id, and you can specify the pod limit.

az aks create --name aks-cluster --resource-group aks --network-plugin azure --max-pods 30 --service-cidr 10.0.0.0/16 --dns-service-ip 10.0.0.10 --docker-bridge-address 172.17.0.1/16 --vnet-subnet-id /subscriptions/{SUBSCRIPTION ID}/resourceGroups/{RESOURCE GROUP NAME}/providers/Microsoft.Network/virtualNetworks/{VIRTUAL NETWORK NAME}/subnets/{SUBNET NAME}
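
If you do not have the subnet resource id at hand, you can query it with the Azure CLI. The resource group, VNET and subnet names below are placeholders:

az network vnet subnet show --resource-group aks --vnet-name aks-vnet --name aks-subnet --query id --output tsv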

If you need an ARM template to get started, have a look at the following one.

-> https://github.com/neumanndaniel/armtemplates/blob/master/container/aks.json

An ARM template deployment can be kicked off through the Azure CLI or Azure PowerShell. Here is the Azure CLI example.

az group deployment create --resource-group aks --template-uri https://raw.githubusercontent.com/neumanndaniel/armtemplates/master/container/aks.json --parameters ./aks.parameters.json --verbose

Besides the ARM template, you will need an ARM template parameter file. The parameter file includes all your parameter values, like the AKS cluster name and the VNET configuration.
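
The exact parameter names depend on the template you use. As a rough sketch, and with illustrative names that do not necessarily match my aks.json template, such a parameter file could look like this:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "clusterName": {
            "value": "aks-cluster"
        },
        "vnetSubnetId": {
            "value": "/subscriptions/{SUBSCRIPTION ID}/resourceGroups/{RESOURCE GROUP NAME}/providers/Microsoft.Network/virtualNetworks/{VIRTUAL NETWORK NAME}/subnets/{SUBNET NAME}"
        },
        "serviceCidr": {
            "value": "10.0.0.0/16"
        },
        "dnsServiceIp": {
            "value": "10.0.0.10"
        },
        "dockerBridgeCidr": {
            "value": "172.17.0.1/16"
        }
    }
}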

After the deployment we will see that the AKS cluster was successfully deployed into the custom VNET.

akscustomvnet02akscustomvnet03

Now, we can connect the AKS custom VNET through Virtual Network peering with the rest of our Azure and on-premises infrastructure.

-> https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview
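
As a rough sketch, a peering from the AKS VNET to another VNET could be created with the Azure CLI like this. The names are placeholders, the flag names may differ slightly between CLI versions, and keep in mind that a peering always has to be created in both directions:

az network vnet peering create --resource-group aks --vnet-name aks-vnet --name aks-to-hub --remote-vnet /subscriptions/{SUBSCRIPTION ID}/resourceGroups/hub/providers/Microsoft.Network/virtualNetworks/hub-vnet --allow-vnet-access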

Der Beitrag Build Azure Kubernetes Service cluster with bring your own Virtual Network on Azure erschien zuerst auf Daniel's Tech Blog.

Kubernetes network policies on Azure Kubernetes Service with Azure NPM


Microsoft provides its own network policy module, called Azure NPM, to implement Kubernetes network policies with the Azure CNI plugin for acs-engine and AKS.

-> https://github.com/Azure/azure-container-networking/tree/master/npm

Azure NPM has been available for acs-engine for quite some time, where it is natively integrated, but not yet for AKS. If you want to use Azure NPM on Azure Kubernetes Service, you can still do so. Just run the following command to deploy the Azure NPM daemonset on your AKS cluster.

-> https://github.com/Azure/acs-engine/blob/master/parts/k8s/addons/kubernetesmasteraddons-azure-npm-daemonset.yaml

kubectl apply -f https://raw.githubusercontent.com/Azure/acs-engine/master/parts/k8s/addons/kubernetesmasteraddons-azure-npm-daemonset.yaml

Afterwards run

kubectl get pods -n kube-system --selector=k8s-app=azure-npm -o wide

to check the successful deployment.

The output should look like this.

NAME              READY   STATUS    RESTARTS   AGE   IP             NODE                       NOMINATED NODE
azure-npm-5f6mb   1/1     Running   0          1d    10.240.0.4     aks-agentpool-14987876-1   <none>
azure-npm-lf5w7   1/1     Running   0          1d    10.240.0.115   aks-agentpool-14987876-0   <none>

Azure NPM works following the LIFO principle to establish fine-grained network isolation between your pods.

-> https://github.com/Azure/azure-container-networking/pull/258

So, you can ensure that only specific pods forming the front-end can talk to a certain backend. Following the LIFO principle, you deploy a deny all inbound network policy to a certain namespace first to restrict all ingress traffic to all pods in this namespace.

Warning: Do not deploy a deny all inbound or outbound network policy to the kube-system namespace. This causes a serious operational impact on your Kubernetes cluster.

Then you deploy additional network policies to allow traffic to your container applications as necessary, while maintaining the network security and isolation between the different container applications in the namespace. Furthermore, with the deny all inbound network policy you establish a first line of defense, so that new container applications are not directly exposed and available for network traffic.

Let us dive into one example. I am running two simple web applications on my AKS cluster. One provides a static web page (aks.trafficmanager.net) and the other one (src.trafficmanager.net) shows you specific information about the web request targeting the web application.

aksazurenpm08

The sketch above shows how the web applications are published to the outside world.

aksazurenpm01aksazurenpm02

Before starting the lockdown of the default namespace, we deploy a busybox container first.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

kubectl apply -f https://raw.githubusercontent.com/neumanndaniel/kubernetes/master/azure-npm/busybox.yaml

-> https://github.com/neumanndaniel/kubernetes/blob/master/azure-npm/busybox.yaml

We need the busybox container later in the example.

Next, we roll out the deny all inbound network policy to the default namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-inbound
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress

kubectl apply -f https://raw.githubusercontent.com/neumanndaniel/kubernetes/master/azure-npm/deny-all-inbound.yaml

-> https://github.com/neumanndaniel/kubernetes/blob/master/azure-npm/deny-all-inbound.yaml

When calling the web applications, they are not accessible anymore.

aksazurenpm03

So, we allow the traffic to the aks.trafficmanager.net web application from the Internet again with the following network policy.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-go-webapp
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: go-webapp
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 8080
    from: []

kubectl apply -f https://raw.githubusercontent.com/neumanndaniel/kubernetes/master/azure-npm/allow-go-webapp.yaml

-> https://github.com/neumanndaniel/kubernetes/blob/master/azure-npm/allow-go-webapp.yaml

As you can see in the template, Kubernetes network policies are label-based and therefore fully dynamic, not dependent on the frequently changing pod IP addresses.

aksazurenpm04aksazurenpm05

The aks.trafficmanager.net web application is now accessible again, but not src.trafficmanager.net. This one should only be accessible from the busybox container, not from the Internet or any other pods running on the AKS cluster.

In the next network policy template, you will see that the label-based selection also works in the from section of the network policy definition.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-src-ip
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: src-ip-internal
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 8080
    from:
      - podSelector:
          matchLabels:
            app: busybox

kubectl apply -f https://raw.githubusercontent.com/neumanndaniel/kubernetes/master/azure-npm/allow-src-ip.yaml

-> https://github.com/neumanndaniel/kubernetes/blob/master/azure-npm/allow-src-ip.yaml

So, the network policy is applied to all pods with the label matching app=src-ip-internal and only allows ingress traffic from pods matching the label app=busybox. Let us connect to the busybox container to test if it works.

kubectl exec -it busybox /bin/sh

In the container itself we run the following set of commands to test the connectivity.

wget --spider -S src.trafficmanager.net
wget --spider -S src-ip-internal
wget --spider -S 10.240.255.252
wget src-ip-internal
cat index.html

Have a look at the following screenshots showing the output of the previous commands.

aksazurenpm06aksazurenpm07

Calling src.trafficmanager.net does not work, as expected, because it is the external address. Trying the Kubernetes service address src-ip-internal or the IP address of the Azure internal load balancer works. So, the src.trafficmanager.net web application is only accessible from the busybox container inside the AKS cluster.

I hope you got an idea of how network policies can be a beneficial security building block in your AKS cluster to better control the traffic flow to and from your container applications.

A good collection of Kubernetes network policy examples and use cases can be found under the following GitHub repository.

-> https://github.com/ahmetb/kubernetes-network-policy-recipes

Der Beitrag Kubernetes network policies on Azure Kubernetes Service with Azure NPM erschien zuerst auf Daniel's Tech Blog.

Running Ambassador API gateway on Azure Kubernetes Service


Lately I was playing around with the Ambassador Kubernetes-native microservices API gateway as an ingress controller on Azure Kubernetes Service.

-> https://www.getambassador.io/

Ambassador is based on the popular L7 proxy Envoy by Lyft. Besides the API gateway capabilities, you can use Ambassador simply as an ingress controller for publishing your container applications to the outside world.

-> https://www.getambassador.io/features/

The difference from other ingress controller or proxy implementations on Kubernetes is that Ambassador does not rely on the Kubernetes ingress object. You configure Ambassador through annotations in the Kubernetes service object of your container application.

...
  annotations:
    getambassador.io/config: |
      ---
        apiVersion: ambassador/v1
        kind:  Mapping
        name:  src-ip
        prefix: /
        host: src.trafficmanager.net
        service: src-ip
...

Before we dig deeper into the configuration, let us have a look at the deployment of Ambassador on an AKS cluster. On the Ambassador website you can find two getting started guides: one leveraging YAML templates and the other one a Helm chart.

-> https://www.getambassador.io/user-guide/getting-started/
-> https://www.getambassador.io/user-guide/helm/

In my tests I used the YAML template for the Ambassador deployment and downloaded it to my local workstation as recommended.

-> https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
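
For example, the template can be downloaded with a command like:

curl -LO https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml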

We will adjust the template before deploying Ambassador on an AKS cluster and come back to this later.

The reason for the adjustments is the Ambassador service definition, which sets the externalTrafficPolicy to Local instead of using the Kubernetes default Cluster. This preserves the client IP addresses and prevents the additional hop you can expect with externalTrafficPolicy set to Cluster.

If you want to know more about Kubernetes external traffic policies in detail, have a look at the following blog post.

-> https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies
-> https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
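
A minimal sketch of the adjusted Ambassador service object with this setting could look like the following. It is not the exact content of the linked ambassador-svc.yaml, and the target port depends on the Ambassador version in use:

apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  # Preserves the client source IP and avoids the additional hop
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    # Must match the listen port of the Ambassador container in the deployed version
    targetPort: 80
  selector:
    service: ambassador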

When using externalTrafficPolicy set to Local we should specify a pod anti-affinity rule in the Ambassador template to ensure equal traffic distribution across all Ambassador pods.

...
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: service
                operator: In
                values:
                - ambassador
            topologyKey: kubernetes.io/hostname
...

The pod anti-affinity rule requiredDuringSchedulingIgnoredDuringExecution applies during the scheduling of the Ambassador pods and forces Kubernetes to deploy the pods on different agent nodes.

A more moderate pod anti-affinity rule is preferredDuringSchedulingIgnoredDuringExecution.

...
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: service
                  operator: In
                  values:
                  - ambassador
              topologyKey: kubernetes.io/hostname
...

This rule tells Kubernetes to deploy the pods on different agent nodes if possible, but it does not prevent two or more Ambassador pods from running on the same agent node in certain circumstances.

You can use the following templates to roll out Ambassador on an AKS cluster.

-> https://github.com/neumanndaniel/kubernetes/blob/master/ambassador/ambassador-rbac.yaml
-> https://github.com/neumanndaniel/kubernetes/blob/master/ambassador/ambassador-rbac-soft.yaml
-> https://github.com/neumanndaniel/kubernetes/blob/master/ambassador/ambassador-svc.yaml

Just run the following kubectl commands.

kubectl apply -f https://raw.githubusercontent.com/neumanndaniel/kubernetes/master/ambassador/ambassador-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/neumanndaniel/kubernetes/master/ambassador/ambassador-svc.yaml

When the deployment was successful you should see a similar output on your AKS cluster.

[] > kubectl get pods -o wide -l service=ambassador
NAME                          READY   STATUS    RESTARTS   AGE     IP             NODE                                NOMINATED NODE
ambassador-7df789769b-hbltf   1/1     Running   0          2m26s   10.240.0.204   aks-nodepool1-14987876-vmss000001   <none>
ambassador-7df789769b-lx5hf   1/1     Running   4          6m3s    10.240.1.14    aks-nodepool1-14987876-vmss000002   <none>
ambassador-7df789769b-mx65k   1/1     Running   1          6m2s    10.240.1.138   aks-nodepool1-14987876-vmss000003   <none>
[] > kubectl get svc -o wide -l service=ambassador
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE     SELECTOR
ambassador   LoadBalancer   10.0.44.22   51.136.55.44   80:30314/TCP   5d15h   service=ambassador
[] > kubectl get svc -o wide -l service=ambassador-admin
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE     SELECTOR
ambassador-admin   NodePort   10.0.173.115   <none>        8877:32422/TCP   5d21h   service=ambassador

As an example container application I am using the echoserver.

-> https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/echoserver?gcrImageListsize=30

The echoserver responds with information about the HTTP request.

ambassador01

Let us have a look at the service object definition in the template file.

-> https://github.com/neumanndaniel/kubernetes/blob/master/ambassador/src-ip-ambassador.yaml

apiVersion: v1
kind: Service
metadata:
  name: src-ip
  labels:
    app: src-ip
  annotations:
    getambassador.io/config: |
      ---
        apiVersion: ambassador/v1
        kind:  Mapping
        name:  src-ip
        prefix: /
        host: src.trafficmanager.net
        service: src-ip
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: src-ip

The first line of the Ambassador annotation defines the API version, followed by the object definition. In this case it is a Mapping object for our redirect. Then we define the name of the Mapping object before we configure the redirect. As I am using an Azure Traffic Manager for my container application exclusively, I am specifying / as my prefix. The host contains the Traffic Manager URL. The last thing to do in the configuration is to specify the Kubernetes service that receives the traffic.

It is a simple redirect configuration and more configuration examples can be found on the Ambassador website if you want to test more advanced scenarios.

-> https://www.getambassador.io/reference/configuration

Der Beitrag Running Ambassador API gateway on Azure Kubernetes Service erschien zuerst auf Daniel's Tech Blog.

Publishing Azure Functions on AKS through the Ambassador API gateway


In my last blog post I introduced you to the Ambassador Kubernetes-native microservices API gateway as an ingress controller running on Azure Kubernetes Service.

-> https://www.danielstechblog.io/running-ambassador-api-gateway-on-azure-kubernetes-service/

Today I would like to show you how to publish an Azure Function running on Kubernetes through the Ambassador API gateway. It is nothing special compared to a web application or anything else providing an HTTP REST API. But you must pay attention to one setting of the Ambassador configuration.

As you may know, serverless functions have a so-called cold start time.

-> https://azure.microsoft.com/en-us/blog/understanding-serverless-cold-start/

That said, it takes a couple of seconds for the function to start up and process the request after a long idle time.

By default, Ambassador has a timeout of 3 seconds and it is very likely that you will receive the upstream request timeout error message from Ambassador in the mentioned scenario.

> curl -X POST http://helloworld.trafficmanager.net/api/helloWorld -H "Content-Type: application/json" --data-raw '{"input": "Azure Functions in a Container on AKS"}'
upstream request timeout

So, you would like to prevent this user experience when running Azure Functions on Kubernetes published through Ambassador. All that needs to be done is to add the timeout_ms setting to the Ambassador annotation of the service object. The timeout is specified in milliseconds.

apiVersion: v1
kind: Service
metadata:
  name: helloworld-function-figlet
  labels:
    app: function-figlet
  annotations:
    getambassador.io/config: |
      ---
        apiVersion: ambassador/v1
        kind: Mapping
        name: helloworld-function-figlet
        prefix: /
        host: helloworld.trafficmanager.net
        service: helloworld-function-figlet
        timeout_ms: 20000
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: function-figlet

I recommend setting the timeout to 20 seconds in such a scenario.

> curl -X POST http://helloworld.trafficmanager.net/api/helloWorld -H "Content-Type: application/json" --data-raw '{"input": "Azure Functions in a Container on AKS"}'
    _                          _____                 _   _
   / \    _____   _ _ __ ___  |  ___|   _ _ __   ___| |_(_) ___  _ __  ___
  / _ \  |_  / | | | '__/ _ \ | |_ | | | | '_ \ / __| __| |/ _ \| '_ \/ __|
 / ___ \  / /| |_| | | |  __/ |  _|| |_| | | | | (__| |_| | (_) | | | \__ \
/_/   \_\/___|\__,_|_|  \___| |_|   \__,_|_| |_|\___|\__|_|\___/|_| |_|___/
 _                  ____            _        _
(_)_ __     __ _   / ___|___  _ __ | |_ __ _(_)_ __   ___ _ __    ___  _ __
| | '_ \   / _` | | |   / _ \| '_ \| __/ _` | | '_ \ / _ \ '__|  / _ \| '_ \
| | | | | | (_| | | |__| (_) | | | | || (_| | | | | |  __/ |    | (_) | | | |
|_|_| |_|  \__,_|  \____\___/|_| |_|\__\__,_|_|_| |_|\___|_|     \___/|_| |_|
    _    _  ______
   / \  | |/ / ___|
  / _ \ | ' /\___ \
 / ___ \| . \ ___) |
/_/   \_\_|\_\____/

Der Beitrag Publishing Azure Functions on AKS through the Ambassador API gateway erschien zuerst auf Daniel's Tech Blog.
