<?xml version="1.0" encoding="ISO-8859-1"?>
<rss version="2.0">
<channel>
<title>Harvesting Clouds</title>
<link>https://HarvestingClouds.com</link>
<description>Blog about all things regarding private and public clouds</description>
<language>en-us</language>
<item>
<title>Windows Admin Center in the Azure portal - Index</title>
<description><![CDATA[<p>This is the <strong>Index</strong> for the series of blog posts regarding the <strong>Windows Admin Center in the Azure portal </strong>.</p>
<p><strong>Note</strong> that this Index is updated regularly as more posts are added around this topic.</p>
<ol>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-1-intro-to-the-easiest-server-management/" target="_blank">Intro to the Easiest Server Management</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-2-setting-it-up/" target="_blank">Setting it up</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-3-under-the-hood/" target="_blank">Under the hood</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-4-connecting-to-it/" target="_blank">Connecting to it</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-5-working-with-it/" target="_blank">Working with it</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-6-various-tools-part-1/" target="_blank">Various Tools - Part 1</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-7-various-tools-part-2/" target="_blank">Various Tools - Part 2</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-8-caveats/" target="_blank">Caveats</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-9-securing-inbound-connectivity/" target="_blank">Securing inbound connectivity</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-10-troubleshooting-common-issues/" target="_blank">Troubleshooting common issues</a></li>
<li><a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-11-automating-deployment/" target="_blank">Automating deployment</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-index</link>
<pubDate>Mon, 28 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>An Easter Egg or hidden feature within the Azure Portal - Simplified View of all resources</title>
<description><![CDATA[<p>My friend Eric Bogenschuetz first showed me this hidden feature within the Azure portal. It is a borderline easter egg. Let's look at what it is and how to find it. </p>
<h2>The hidden feature</h2>
<p>The latest view for all resources within a Resource Group, or for all resources across the whole environment, within the Azure portal is very advanced, with lots of async queries running behind the scenes and many features, filtering capabilities, etc. Sometimes this view can take time to reflect changes if lots of changes are happening at the same time, and if the view still doesn't update after multiple refreshes, you have to refresh the whole page to circumvent the issue. Luckily there is a &quot;<strong>Simplified View</strong>&quot; within the Azure portal that tones down the optimizations and provides you with a very simple view of all your resources.</p>
<h2>How to access it</h2>
<p>To access this hidden feature, simply click the &quot;<strong>Refresh</strong>&quot; button 5 times within the Resource Group or All Resources view, waiting between clicks for each refresh to finish. On the 5th click, instead of refreshing the page, the portal prompts you to switch to a simplified view, as shown below.</p>
<img src="/images/16486063276243bc7729ef7.png" alt="Prompt to switch">
<p>If you click on the blue button for &quot;Switch to simplified view&quot; you are shown the simplified view as shown below.</p>
<img src="/images/16486063366243bc80638e9.png" alt="Simplified view">
<p>Give it a try yourself. Let me know if you find this interesting.</p>]]></description>
<link>https://HarvestingClouds.com/post/an-easter-egg-or-hidden-feature-within-the-azure-portal-simplified-view-of-all-resources</link>
<pubDate>Sun, 27 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>How to create and use the Storage Event Trigger in Azure Data Factory</title>
<description><![CDATA[<p>In Azure Data Factory, you can create different types of triggers: </p>
<ul>
<li>Schedule</li>
<li>Tumbling window</li>
<li>Storage events</li>
<li>Custom events</li>
</ul>
<p>In this post, we will be talking about the <strong>Storage events</strong> type trigger.</p>
<p>Make sure the subscription is registered with the <strong>Event Grid resource provider</strong>; otherwise, you will get an error while publishing the changes. To learn how to register the resource provider, see: <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types" target="_blank">Register Azure resource providers</a>.</p>
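<p>As a sketch, the registration can also be done with Azure PowerShell (assuming the Az module is installed and you are signed in to the subscription):</p>
<pre><code># Register the Event Grid resource provider on the current subscription
Register-AzResourceProvider -ProviderNamespace "Microsoft.EventGrid"

# Verify the registration state (it should eventually show "Registered")
Get-AzResourceProvider -ProviderNamespace "Microsoft.EventGrid" |
    Select-Object ProviderNamespace, RegistrationState</code></pre>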
<h2>How to create a trigger from the portal</h2>
<p>Go to the Author tab of the Azure Data Factory, which is #1 in the screenshot below, and then select your main pipeline. </p>
<h3>Step 1</h3>
<p>Click on ‘Add trigger’, then click on ‘New/edit’ to create the new trigger. From the Type dropdown, select ‘Storage events’.</p>
<img src="/images/1648184947623d4e7305735.png" alt="01">
<p>The next step is to select the subscription, storage account, and the container name within that storage account.</p>
<img src="/images/1648184956623d4e7c202d6.png" alt="02">
<p>The next input parameters are the &quot;Blob path begins with&quot; and &quot;Blob path ends with&quot; properties, which allow you to specify filters for the blob paths on which the events will fire. At least one of these properties needs to be defined for the storage event trigger to work.</p>
<img src="/images/1648184963623d4e83b1878.png" alt="03">
<p>Next, select the events on which you want the trigger to fire. At least one event type must be selected; you can also select both. The &quot;Ignore empty blobs&quot; option ensures that blobs of 0 bytes do not fire the trigger.</p>
<img src="/images/1648184970623d4e8a8e879.png" alt="04">
<p>Click Continue to create a preview of the trigger. If there are any blobs in the specified container that match the conditions configured for the trigger, the preview shows all the blobs that satisfy those conditions. Click Continue to finish the preview.</p>
<img src="/images/1648185016623d4eb82ae39.png" alt="05">
<p>If your pipeline has parameters, you can specify them on the &quot;Trigger runs parameter&quot; blade next. The storage event trigger captures the folder path and file name of the blob into the properties <code>@triggerBody().folderPath</code> and <code>@triggerBody().fileName</code>. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the <code>@pipeline().parameters.parameterName</code> expression throughout the pipeline.</p>
<img src="/images/1648185023623d4ebf82ab4.png" alt="06">
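<p>For illustration (the parameter names <code>sourceFolder</code> and <code>sourceFile</code> are hypothetical), the mapping on the &quot;Trigger runs parameter&quot; blade could look like this:</p>
<pre><code># Values entered for the pipeline parameters on the trigger (ADF expressions):
sourceFolder = @triggerBody().folderPath
sourceFile   = @triggerBody().fileName

# Referencing the captured values later inside any pipeline activity:
@pipeline().parameters.sourceFolder
@pipeline().parameters.sourceFile</code></pre>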
<p>To test the trigger, go to the container for which you have defined the trigger and upload any blob that matches your ‘Blob path ends with’ filter. Once the upload is done, go to the Monitor section of the Azure Data Factory and you will be able to see the trigger and pipeline run logs under Trigger runs and Pipeline runs.</p>
<img src="/images/1648185030623d4ec683124.png" alt="07">
<p>Below you can view the Trigger runs section:</p>
<img src="/images/1648185130623d4f2a225e6.png" alt="08">
<p>Below you can view the Pipeline runs section:</p>
<img src="/images/1648185138623d4f321fcb6.png" alt="09">
<p>As you can see, the storage events trigger is very easy to set up in Azure Data Factory and provides a very powerful tool in your arsenal to automate various complex scenarios.</p>]]></description>
<link>https://HarvestingClouds.com/post/how-to-create-and-use-the-storage-event-trigger-in-azure-data-factory</link>
<pubDate>Wed, 23 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Script Samples - Series Index</title>
<description><![CDATA[<p>This blog is an <strong>Index</strong> of various blogs and related script samples in the series &quot;<strong>Azure Script Samples</strong>&quot;:</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/script-sample-format-ea-billing-usage-csv-for-tags/" target="_blank">Format EA Billing Usage Csv for Tags</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-export-all-azure-automation-account-runbooks-and-variables/" target="_blank">Export All Azure Automation Account Runbooks and Variables</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-export-all-oms-log-analytics-saved-searches/" target="_blank">Export All OMS Log Analytics Saved Searches</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-check-azure-site-recovery-asr-prerequisite-services/" target="_blank">Check Azure Site Recovery (ASR) Prerequisite Services</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-set-azure-site-recovery-asr-prerequisite-services/" target="_blank">Set Azure Site Recovery (ASR) Prerequisite Services</a></li>
<li><a href="https://HarvestingClouds.com/post/updating-a-custom-rbac-role-in-azure/" target="_blank">Updating a Custom RBAC Role in Azure</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-azure-automation-runbook-for-asr-recovery-plan/" target="_blank">Azure Automation - Runbook for ASR Recovery Plan</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-azure-automation-get-vm-information-from-asr-recovery-plan-context/" target="_blank">Azure Automation - Get VM Information from ASR Recovery Plan Context</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-azure-automation-sending-email-notification/" target="_blank">Azure Automation - Sending Email Notification</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-azure-automation-get-vm-information-from-actual-azure-virtual-machine/" target="_blank">Azure Automation - Get VM Information from actual Azure Virtual Machine</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-azure-automation-check-vm-availability/" target="_blank">Azure Automation - Check VM Availability</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-apply-locks-on-various-azure-resources/" target="_blank">Apply Locks on Various Azure Resources</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-check-for-pending-reboots-on-various-vms-in-your-environment/" target="_blank">Check for Pending Reboots on various VMs in your environment</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-apply-rbac-role-to-users-on-resources/" target="_blank">Apply RBAC Role to Users on Resources</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-getting-azure-resource-reports/" target="_blank">Getting Azure Resource Reports</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-generate-azure-resources-report-by-tags-v30/" target="_blank">Generate Azure Resources Report by Tags v3.0</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-set-tags-on-azure-resources/" target="_blank">Set Tags on Azure Resources</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-creating-multiple-resource-groups-with-rbac-role-assignment/" target="_blank">Creating Multiple Resource Groups with RBAC Role Assignment</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-checking-if-the-prompt-for-current-script-is-elevated-or-not/" target="_blank">Checking if the Prompt for current script is Elevated or not</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-vm-operations-setting-up-the-vm-backup/" target="_blank">VM Operations - Setting up the VM Backup</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-vm-operations-convert-vm-to-managed-disk-vm/" target="_blank">VM Operations - Convert VM to Managed Disk VM</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-vm-operations-change-azure-vm-size/" target="_blank">VM Operations - Change Azure VM Size</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-vm-operations-apply-hub-licensing-to-existing-vms/" target="_blank">VM Operations - Apply HUB Licensing to Existing VMs</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-vm-operations-export-vm-configurations/" target="_blank">VM Operations - Export VM Configurations</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-vm-operations-working-with-vm-snapshots-and-moving-the-managed-vms-across-subscriptions-and-regions/" target="_blank">VM Operations - Working with VM Snapshots and moving the Managed VMs across subscriptions and regions</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-removing-locks-from-azure-resources/" target="_blank">Removing Locks from Azure Resources</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-generate-report-for-route-tables-with-associated-subnets-and-related-information/" target="_blank">Generate Report for Route Tables with associated Subnets and related information</a></li>
<li><a href="https://HarvestingClouds.com/post/script-sample-disassociate-and-associate-subnets-to-route-tables/" target="_blank">Disassociate and Associate Subnets to Route Tables</a></li>
<li><a href="https://harvestingclouds.com/post/getting-a-report-for-all-vms-and-its-related-os-and-data-disks-code-sample/" target="_blank">Getting a report for all VMs and its related OS and Data Disks</a></li>
<li><a href="https://harvestingclouds.com/post/automate-azure-route-table-testing-using-network-watcher-and-powershell-scripting-code-sample/" target="_blank">Automate Azure Route Table testing using Network Watcher and PowerShell scripting</a></li>
<li><a href="https://harvestingclouds.com/post/automatically-trigger-on-demand-azure-vm-backup-for-multiple-vms-code-sample/" target="_blank">Automatically trigger on-demand Azure VM Backup for multiple VMs</a></li>
<li><a href="https://harvestingclouds.com/post/automatically-trigger-on-demand-sap-hana-on-azure-vm-backup-for-multiple-databases-code-sample/" target="_blank">Automatically trigger on-demand SAP HANA on Azure VM Backup for multiple Databases</a></li>
<li><a href="https://harvestingclouds.com/post/moving-an-azure-virtual-machine-vm-to-an-availability-zone-the-automated-way-with-complete-code-sample/" target="_blank">Moving an Azure Virtual Machine (VM) to an Availability Zone the automated way</a></li>
<li><a href="https://harvestingclouds.com/post/export-all-custom-azure-policies-code-sample/" target="_blank">Export all custom Azure Policies</a></li>
<li><a href="https://harvestingclouds.com/post/automating-the-enabling-of-byos-licensing-for-all-red-hat-enterprise-linux-rhel-vms-in-azure-code-sample/" target="_blank">Automating the enabling of BYOS licensing for all Red Hat Enterprise Linux (RHEL) VMs in Azure</a></li>
<li><a href="https://harvestingclouds.com/post/automating-the-enabling-of-byos-licensing-for-all-suse-vms-in-azure-code-sample/" target="_blank">Automating the enabling of BYOS licensing for all SUSE VMs in Azure</a></li>
<li><a href="https://harvestingclouds.com/post/automate-locking-of-all-resources-of-a-particular-type-code-sample/" target="_blank">Automate locking of all resources of a particular type</a></li>
<li><a href="https://harvestingclouds.com/post/installing-enhanced-monitoring-agent-for-multiple-sap-workloads-in-azure-code-sample/" target="_blank">Installing Enhanced Monitoring Agent for Multiple SAP workloads in Azure</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-script-samples-series-index</link>
<pubDate>Sat, 19 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Capacity Reservations experience now available within Azure Site Recovery (ASR)</title>
<description><![CDATA[<p>Capacity Reservations ensure that you get the capacity (in terms of CPU and memory) to start your VMs when you need them. This is a very common requirement when you are performing a failover in a disaster scenario. At that time, chances are that everyone else will also try to perform a failover, and there can be a capacity crunch. This often results in &quot;Capacity Allocation&quot; errors, and your VMs will not come up. The solution is either to select an alternate SKU (if one is available) or to reserve the capacity beforehand. </p>
<p>Now, this experience is inbuilt into Azure Site Recovery (ASR). You have two places where you can set this up. </p>
<h2>During Enabling Replication</h2>
<p>When you are enabling the replication, you can select the Capacity Reservation Settings. You will be reserving the capacity for the target region where you will failover during a Disaster Recovery scenario.</p>
<img src="/images/164754634562338fe936985.png" alt="Enabling Replication Wizard">
<p>All you need to do is to select the Capacity Reservation Group. Click on the link above for the &quot;<em>View or Edit Capacity Reservation group assignment</em>&quot;. This will open up a window as shown below. Select the group and you are done.</p>
<img src="/images/164754635162338fefda5b5.png" alt="Capacity Reservation group selection">
<h2>For Already Replicated VMs</h2>
<p>For the VMs that are already being replicated, you also have the option to update the Capacity Reservation settings. To do this, navigate to your Recovery Services Vault, then to &quot;Replicated items&quot; under &quot;Protected items&quot;, then select the VM for which you want to update the settings, and navigate to the Compute settings under General. This is where you will see the &quot;Capacity Reservation Settings&quot;. </p>
<p>Similar to what you did during enabling the replication, you can select a Capacity Reservation Group here.</p>
<img src="/images/16475804966234155069e29.png" alt="Capacity Reservation under Compute Settings">
<p><strong>References</strong>: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/virtual-machines/capacity-reservation-overview" target="_blank">On-demand Capacity Reservation</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-common-questions#capacity" target="_blank">Why to use Capacity Reservations</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/capacity-reservations-experience-now-available-within-azure-site-recovery-asr</link>
<pubDate>Mon, 14 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Understanding Capacity Reservations in Azure</title>
<description><![CDATA[<p><strong>Capacity Reservations</strong> ensure that you get the capacity (in terms of CPU and memory) to start your VMs when you need them. This is a very common requirement when you are performing a failover in a disaster scenario. At that time, chances are that everyone else will also try to perform a failover, and there can be a capacity crunch. This often results in &quot;Capacity Allocation&quot; errors, and your VMs will not come up. The solution is either to select an alternate SKU (if one is available) or to reserve capacity beforehand. </p>
<h2>Differences</h2>
<p><strong>Capacity Reservation</strong> is different from <strong>Reserved Instances</strong>. Here are the key differences:</p>
<ol>
<li><strong>Time considerations (Term)</strong> - Reserved Instance is a commitment to use a resource for a period of time. You get a discount for this commitment. E.g. 1-year or 3-year commitment on a VM. Capacity Reservation doesn't have a time/term commitment. It can be for any duration of time. The capacity is reserved and available immediately until the reservation is deleted.</li>
<li><strong>Linking</strong> - Reserved Instance is linked only to the VM size. The Capacity Reservation is defined with the combination of VM size, location, and quantity of instances to be reserved.</li>
<li><strong>Capacity Guarantee (SLA)</strong> - Reserved Instances don't guarantee the capacity. If the VM SKU is out of capacity, you will get a Capacity Allocation error. Capacity Reservation guarantees capacity availability.</li>
<li><strong>Region vs Availability Zones</strong> - Capacity Reservations can be made for a region or an Availability Zone. Reserved Instances are available at the region level, e.g. East US or Central US.</li>
<li><strong>Billing</strong> - Reserved Instances provide you a discount by committing to a term. Capacity Reservations are charged at pay-as-you-go rates for the underlying VM size. More on this in the next section.</li>
</ol>
<h2>Pricing</h2>
<p>Capacity Reservations are priced at the same rate as the underlying VM size (at pay-as-you-go rates). E.g. you have 4 D4s_v3 VMs running outside a reservation and you reserve capacity for 4 such VMs. You will then be charged for 8 VMs: the 4 running VMs (as per their usage) and the 4 capacity reservations (regardless of whether you use them or not).</p>
<p>But if you have Reserved Instances, then the reserved capacity is provided free of cost.</p>
<p>Also, when you are using the capacity that you reserved via Capacity Reservations then there is no additional charge for the reservation (other than the VM cost for the up and running VM).</p>
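<p>As a hedged sketch, a capacity reservation can also be created with Azure PowerShell from the Az.Compute module (the resource group, names, location, and quantity below are illustrative; check the cmdlet parameters in your module version):</p>
<pre><code># Create a Capacity Reservation Group in the target region
New-AzCapacityReservationGroup -ResourceGroupName "RG-DR" -Name "crg-eastus" -Location "EastUS"

# Reserve capacity for 4 instances of the D4s_v3 size within that group
New-AzCapacityReservation -ResourceGroupName "RG-DR" -ReservationGroupName "crg-eastus" `
    -Name "cr-d4sv3" -Location "EastUS" -Sku "Standard_D4s_v3" -CapacityToReserve 4</code></pre>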
<h2>Caveats</h2>
<ul>
<li>While creating the Capacity Reservation, the capacity should be available in the target region or Availability Zone, otherwise, the reservation will fail (just like VM creation).</li>
<li>Spot VMs and Azure Dedicated Host Nodes are not supported</li>
<li>Availability Sets are not supported</li>
<li>Deployment constraints like Proximity Placement Group, Update domains, and UltraSSD storage are also not supported</li>
<li>Only the Av2, B, D, E, &amp; F VM series are supported right now. The list of supported SKUs will expand in the near future.</li>
<li>Only the subscription that created the reservation can use it.</li>
<li>Reservations are only available to paid Azure customers</li>
</ul>
<p><strong>References</strong>: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/virtual-machines/capacity-reservation-overview" target="_blank">On-demand Capacity Reservation</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-common-questions#capacity" target="_blank">Why to use Capacity Reservations</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/understanding-capacity-reservations-in-azure</link>
<pubDate>Sat, 12 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 11 Automating deployment</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In this post, we will look at how to automate the deployment and how to extend that to perform the deployment on multiple VMs at once. You can automate it in different ways, including ARM templates and PowerShell.</p>
<h2>Automating with ARM Template</h2>
<p>When using ARM Templates, all you need to do is to deploy a resource of type &quot;<em>Microsoft.Compute/virtualMachines/extensions</em>&quot;. The extension name that you are deploying is &quot;<strong>AdminCenter</strong>&quot; and the publisher for the extension is &quot;<strong>Microsoft.AdminCenter</strong>&quot;. Extension type is also &quot;<strong>AdminCenter</strong>&quot;.</p>
<p>The latest template can be found here: <a href="https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/azure/manage-vm#automate-windows-admin-center-deployment-using-an-arm-template" target="_blank">Automate Windows Admin Center deployment using an ARM template</a></p>
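<p>For illustration, the extension resource in an ARM template could look like the sketch below (the <em>apiVersion</em>, parameter names, and <em>autoUpgradeMinorVersion</em> setting are assumptions; refer to the documentation linked above for the authoritative template):</p>
<pre><code>{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "apiVersion": "2021-07-01",
    "name": "[concat(parameters('vmName'), '/AdminCenter')]",
    "location": "[parameters('location')]",
    "properties": {
        "publisher": "Microsoft.AdminCenter",
        "type": "AdminCenter",
        "typeHandlerVersion": "0.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "port": "6516",
            "salt": "[parameters('salt')]"
        }
    }
}</code></pre>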
<h2>Automating with PowerShell</h2>
<p>With PowerShell you are performing the below 3 operations to install the extension:</p>
<ol>
<li>Getting the Network Security Group and adding the outbound rule to allow HTTPS traffic on port 443 to the WindowsAdminCenter Service Tag</li>
<li>Getting the Network Security Group and adding the inbound rule to allow traffic on the Windows Admin Center management port from allowed (or all) IP addresses. If you connect over a private IP address, you can omit this rule; if you connect via a public IP address, lock the rule down to specific source IPs.</li>
<li>Installing the &quot;<strong>AdminCenter</strong>&quot; extension using the &quot;<strong>Set-AzVMExtension</strong>&quot; PowerShell cmdlet.</li>
</ol>
<p>Use the below PowerShell script from official Microsoft documentation:</p>
<pre><code>$resourceGroupName = &lt;get VM's resource group name&gt;
$vmLocation = &lt;get VM location&gt;
$vmName = &lt;get VM name&gt;
$vmNsg = &lt;get VM's primary nsg&gt;
$salt = &lt;unique string used for hashing&gt;

$wacPort = "6516"
$Settings = @{"port" = $wacPort; "salt" = $salt}

# Open outbound port rule for WAC service
Get-AzNetworkSecurityGroup -Name $vmNsg -ResourceGroupName $resourceGroupName | Add-AzNetworkSecurityRuleConfig -Name "PortForWACService" -Access "Allow" -Direction "Outbound" -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" -DestinationAddressPrefix "WindowsAdminCenter" -DestinationPortRange "443" -Priority 100 -Protocol Tcp | Set-AzNetworkSecurityGroup

# Open inbound port rule on VM to be able to connect to WAC
Get-AzNetworkSecurityGroup -Name $vmNsg -ResourceGroupName $resourceGroupName | Add-AzNetworkSecurityRuleConfig -Name "PortForWAC" -Access "Allow" -Direction "Inbound" -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange $wacPort -Priority 100 -Protocol Tcp | Set-AzNetworkSecurityGroup

# Install VM extension
Set-AzVMExtension -ResourceGroupName $resourceGroupName -Location $vmLocation -VMName $vmName -Name "AdminCenter" -Publisher "Microsoft.AdminCenter" -Type "AdminCenter" -TypeHandlerVersion "0.0" -settings $Settings</code></pre>
<h2>Extending PowerShell to deploy on multiple VMs</h2>
<p>You can extend the PowerShell or the ARM Template to deploy the Windows Admin Center to multiple VMs. You can even leverage Azure DevOps pipelines to deploy these across different environments. I provide the below sample to deploy via PowerShell on multiple VMs in Resource Groups that are identified via a wildcard search. You can modify this script as per your requirements. </p>
<p>You can find the complete, up-to-date script on GitHub here: <a href="https://raw.githubusercontent.com/HarvestingClouds/PowerShellSamples/master/Scripts/Enable-WindowsAdminCenter/Enable-WindowsAdminCenter.ps1" target="_blank">https://raw.githubusercontent.com/HarvestingClouds/PowerShellSamples/master/Scripts/Enable-WindowsAdminCenter/Enable-WindowsAdminCenter.ps1</a></p>
<pre><code>#Author: Aman Sharma @ http://HarvestingClouds.com

#Variables
$subscriptionName = "Your Subscription Name"
$salt = "&lt;unique string used for hashing&gt;"
$wacPort = "6516"
$Settings = @{"port" = $wacPort; "salt" = $salt}

try
{
    #Setting the Azure context
    $env = Get-AzEnvironment -Name "AzureCloud"
    Connect-AzAccount -Environment $env
    Set-AzContext -SubscriptionName $subscriptionName

    #Selecting all RGs whose names begin with the text. Notice the wildcard in the name
    #Update this as per your requirements
    $allRequiredRGs = Get-AzResourceGroup -Name "RG-*"

    #Iterating on the Resource Groups
    foreach($currentRG in $allRequiredRGs)
    {
        #Fetch all VMs in the current Resource Group
        $currentRGName = $currentRG.ResourceGroupName
        $VMs = Get-AzVM -ResourceGroupName $currentRGName

        #Iterating on all the VMs
        foreach ($vm in $VMs) 
        {
            $vmLocation = $vm.Location
            $vmName = $vm.Name

            #Finding VM's NSG dynamically
            $vmNsgId = (Get-AzNetworkInterface -ResourceId $vm.NetworkProfile.NetworkInterfaces.Id).NetworkSecurityGroup.Id
            $vmNsg = Get-AzResource -ResourceId $vmNsgId
            $vmNsgName = $vmNsg.Name

            # Open outbound port rule for WAC service
            $vmNsg | Add-AzNetworkSecurityRuleConfig -Name "PortForWACService" -Access "Allow" -Direction "Outbound" -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" -DestinationAddressPrefix "WindowsAdminCenter" -DestinationPortRange "443" -Priority 100 -Protocol Tcp | Set-AzNetworkSecurityGroup

            # Install VM extension
            Set-AzVMExtension -ResourceGroupName $currentRGName -Location $vmLocation -VMName $vmName -Name "AdminCenter" -Publisher "Microsoft.AdminCenter" -Type "AdminCenter" -TypeHandlerVersion "0.0" -settings $Settings

            # Open inbound port rule on VM to be able to connect to WAC
            $vmNsg | Add-AzNetworkSecurityRuleConfig -Name "PortForWAC" -Access "Allow" -Direction "Inbound" -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange $wacPort -Priority 100 -Protocol Tcp | Set-AzNetworkSecurityGroup

        }

    }
}
catch 
{
    Write-Host -ForegroundColor Red "Error while installing extension."
    $Error[0]
    Write-Host -ForegroundColor Red "Error occurred at:"
    $Error[0].InvocationInfo.PositionMessage
}</code></pre>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-11-automating-deployment</link>
<pubDate>Tue, 08 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 10 Troubleshooting common issues</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In this post, we will look at troubleshooting common issues with the Windows Admin Center.</p>
<h2>1. Dealing with the Security prompt while connecting to the Windows Admin Center</h2>
<p>If you get a security prompt while connecting to the Windows Admin Center, then click on the &quot;Advanced&quot; button and then click on the link as shown below to continue.</p>
<img src="/images/1647387071623121bfddaae.png" alt="Security Prompt">
<p><strong>PRO TIP</strong>: If you are not able to see the link or the complete text after clicking on the &quot;Advanced&quot; button, press F11 to enter the full-screen mode. This should make the entire text and the link visible. Another way is to set the zoom level to a low number to ensure the whole text is visible.</p>
<h2>2. Tools Not loading or Not able to connect</h2>
<p>When trying to access the tools, you may keep getting the spinning circle shown below. </p>
<img src="/images/16474124956231850fab392.png" alt="Spinning wheel">
<p>It then keeps turning into &quot;<strong>couldn't load</strong>&quot; errors.</p>
<img src="/images/1647412504623185187a091.png" alt="Couldn't load error">
<h3>Solution</h3>
<p>Navigate to other tools and then back to the required tool to check whether the issue is limited to that tool or affects all tools across the board. If the issue is prevalent across all the tools, then try the following steps.</p>
<ul>
<li><strong>Step #1</strong> - Try refreshing the browser and connecting again to the Windows Admin Center. I know it sounds basic but this saves you from unnecessary troubleshooting.</li>
<li><strong>Step #2</strong> - Make sure that the VM is up and running. We have seen this issue multiple times in our environment.</li>
<li><strong>Step #3</strong> - Make sure that the NSG rules have not been altered and that the network connectivity is still there. If you are connecting from a different location than before, make sure that the inbound connectivity is allowed from your new location to the VM. Network connectivity is another big reason for connectivity failures.</li>
<li><strong>Step #4</strong> - Make sure that the Windows Admin Center service is running on your VM. RDP to the VM and check whether the ServerManagementGateway / Windows Admin Center service is in the Running state.</li>
<li><strong>Step #5</strong> - Make sure you are connecting from Microsoft Edge or Chrome and that you are not using incognito mode.</li>
</ul>
<p>In most cases, step #2 or #3 will help you resolve the connectivity issues.</p>
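<p>The network check in step #3 can also be scripted. Below is a minimal sketch using only Python's standard library; the IP address and port in the example are placeholders, not values from this environment.</p>

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values: your VM's public IP and the WAC port you chose at setup.
if __name__ == "__main__":
    print(can_reach("203.0.113.10", 6516))
```

<p>If this returns False from your current location but True from a previously working one, the NSG rules or your network path are the likely culprits.</p>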
<h2>3. Windows Admin Center extension not installing</h2>
<p>The most common reason for this error is that your VM doesn't meet the requirements for the Windows Admin Center. Make sure that the VM is running Windows Server 2016, 2019, or 2022 as the operating system and has a minimum of 3 GiB of memory.</p>
<p>Review all the requirements here: <a href="https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/azure/manage-vm#requirements" target="_blank">Windows Admin Center requirements</a></p>
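<p>As a quick sanity check, the two VM requirements above can be expressed in a few lines of Python (a simple sketch; the inputs are values you supply yourself, not read from Azure):</p>

```python
SUPPORTED_OS = ("Windows Server 2016", "Windows Server 2019", "Windows Server 2022")

def meets_wac_requirements(os_name: str, memory_gib: float) -> bool:
    """Check the two requirements called out above: supported OS and >= 3 GiB memory."""
    return os_name in SUPPORTED_OS and memory_gib >= 3

print(meets_wac_requirements("Windows Server 2022", 4))     # True
print(meets_wac_requirements("Windows Server 2012 R2", 8))  # False
```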
<p>Networking restrictions in your environment are another common cause of installation issues. User Defined Routes (via Route Tables), an Azure Firewall, or any Network Virtual Appliance in the traffic path can result in an unsupported configuration.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-10-troubleshooting-common-issues</link>
<pubDate>Mon, 07 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 9 Securing inbound connectivity</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<h2>The security concern</h2>
<p>As we saw in one of the previous posts, setting up Windows Admin Center opens inbound connectivity on the specified port for &quot;Any&quot; source. This is shown in the Inbound security rules of the Network Security Group (NSG) for the VM.</p>
<img src="/images/1647383498623113ca4332d.png" alt="Inbound Security Rule">
<p>This is a <strong>potential security risk</strong> as it leaves the system vulnerable, with the port open to any attacker on the public internet.</p>
<h2>The Solution #1</h2>
<p>The solution is simple: lock the rule down to a specific IP address or a range of IP addresses. This limits which IP addresses can access Windows Admin Center on the VM. </p>
<p>To do this, update the Inbound rule on the Network Security Group as shown above and change the &quot;<strong>Source</strong>&quot; for the rule. Instead of &quot;<strong>Any</strong>&quot;, use the public IP address of the machine from which you are connecting to the Windows Admin Center (within the Azure portal). If you know a range of IPs that you will be using, you can specify the range instead.</p>
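<p>Conceptually, the locked-down rule performs the check sketched below. The addresses are illustrative examples, not values from this series.</p>

```python
from ipaddress import ip_address, ip_network

def source_allowed(client_ip: str, allowed_prefixes: list) -> bool:
    """Mimic the NSG 'Source' filter: allow only clients inside the given prefixes."""
    addr = ip_address(client_ip)
    return any(addr in ip_network(prefix) for prefix in allowed_prefixes)

# "Any" corresponds to 0.0.0.0/0; a locked-down rule uses your own IP or range.
print(source_allowed("203.0.113.25", ["203.0.113.0/24"]))  # True
print(source_allowed("198.51.100.9", ["203.0.113.0/24"]))  # False
```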
<h2>How to find your current Public IP address</h2>
<p>Your ISP will provide you with a dynamic public IP address (unless you purchased a static public IP). To find your public IP address, just search for &quot;what is my IP address&quot; and you will find many third-party websites that will show you your current public IP address and possible geographic location. One such website is provided below, just as a reference:</p>
<p><a href="https://www.whatismyip.com/" target="_blank"><a href="https://www.whatismyip.com/">https://www.whatismyip.com/</a></a></p>
<h2>The Solution #2</h2>
<p>The second solution is to simply remove the inbound NSG rule and allow access only on the private IP address. This means you need connectivity to the VM's virtual network from wherever you are accessing the Windows Admin Center (via the Azure portal). </p>
<p>This is a much more secure solution and ensures that there is no connectivity allowed from the public IP to the VM.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-9-securing-inbound-connectivity</link>
<pubDate>Sun, 06 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 8 Caveats</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the previous posts, we looked at different aspects of the Windows Admin Center, from setting it up, to configuring it, and using the various tools in it. In this post, we will look at various caveats and known limitations of this product.</p>
<h2>1. Usage with Virtual Appliances</h2>
<p>Inbound connectivity being redirected by another service (i.e. Network Virtual Appliances, Azure Firewall, etc.) is not supported. You must have inbound connectivity from the Azure portal to one of the direct IP addresses of your VM on the port on which Windows Admin Center is installed. </p>
<p>A Network Virtual Appliance could be a third-party firewall like Palo Alto, Check Point, etc. Azure Firewall, a cloud-native network security service, is another solution that can sit in the communication path. If the communication is not direct, whether because of User Defined Routes (within Route Tables) or anything else, the Windows Admin Center will not work. Hopefully, this restriction will be removed in the near future.</p>
<h2>2. Extending Windows Admin Center to Other VMs</h2>
<p>Microsoft doesn't support extensions to Windows Admin Center in the Azure portal at the time of writing this post. If you manually installed Windows Admin Center in the VM to manage multiple systems, installing this VM extension reduces the functionality to managing just the VM in which the extension is installed. You will need to uninstall the extension to get back the functionality of managing multiple VMs at once.</p>
<h2>3. Usage with Non-public cloud</h2>
<p>Windows Admin Center is currently supported only on the Azure public cloud. It is not supported in Azure China, Azure Government, or other non-public clouds.</p>
<h2>4. Supported Browsers/Apps</h2>
<p>Windows Admin Center is currently supported only on the Microsoft Edge or Chrome browsers. Chrome Incognito mode isn't supported for now. Also, the Azure portal desktop app is not supported.</p>
<p>Reference: <a href="https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/azure/manage-vm" target="_blank">Use Windows Admin Center in the Azure portal to manage a Windows Server VM</a></p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-8-caveats</link>
<pubDate>Sat, 05 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 7 Various Tools - Part 2</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the previous post, we looked at the first 9 tools in the Windows Admin Center. In this post, we will explore more tools present in it. These tools are what provide server management functionality.</p>
<h2>10. Processes</h2>
<p>With Processes, you can view all the processes on the VM. You can also start a new process or end an existing one.</p>
<img src="/images/1647400410623155dad04b6.png" alt="Processes">
<h2>11. Registry</h2>
<p>In the Registry tool, you can view existing registry keys and values. You can also export keys or create new ones.</p>
<img src="/images/1647400418623155e2dc812.png" alt="Registry ">
<h2>12. Remote Desktop</h2>
<p>From the Remote Desktop tool, you can connect to the VM and perform operations that you can't from the Windows Admin Center. First, you provide the credentials and connect to the VM.</p>
<img src="/images/1647400433623155f19644a.png" alt="Remote Desktop - Connection Wizard">
<p>Then, once you are connected you can perform operations on the VM, or disconnect. You also have the option to send the Ctrl+Alt+Del keyboard combination to the VM.</p>
<img src="/images/1647400426623155ea7d2e6.png" alt="Remote Desktop - Connect Experience">
<h2>13. Roles &amp; features</h2>
<p>From roles and features, you can view current roles and features and also add/install new ones.</p>
<img src="/images/1647401197623158ed1a8fa.png" alt="Roles & features">
<h2>14. Scheduled tasks</h2>
<p>With scheduled tasks, you can view the &quot;Task Scheduler Library&quot;. Here you can view all the existing tasks or you can create new tasks. You can also enable and disable existing tasks.</p>
<img src="/images/1647401208623158f86de42.png" alt="Scheduled tasks">
<h2>15. Security</h2>
<p>Security is where you can access the Virus and threat protection. You can create or schedule a new scan and view the protection history. You can schedule daily or weekly scans here.</p>
<img src="/images/16474012186231590256f12.png" alt="Security">
<h2>16. Services</h2>
<p>The Services tool is similar to the &quot;services.msc&quot; management tool on the VM. It lets you manage all the services. You can Start and Stop any service. You can also change different settings for a service like its startup mode, log-on account, recovery options, etc.</p>
<img src="/images/16474012276231590b59444.png" alt="Services ">
<h2>17. Storage</h2>
<p>The Storage tool is similar to Disk Management. You can view all the disks and volumes in the VM here. You can also create a new volume or initialize a disk. Imagine you added or upgraded a disk in the Azure portal: you can then visit this tool, initialize that new disk, and create a volume on it.</p>
<img src="/images/164740127162315937ae87f.png" alt="Storage ">
<h2>18. Updates</h2>
<p>The Updates tool lets you manage the updates on the VM. You can view available updates and trigger their installation. You can also view the update history.</p>
<img src="/images/16474012786231593e78d56.png" alt="Updates ">
<p>These are all the tools that are available in the Windows Admin Center today. The best way to learn about each is to play around and experiment with these tools on a demo VM.</p>
<p>In the next post, we will look at some caveats with the Windows Admin Center.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-7-various-tools-part-2</link>
<pubDate>Fri, 04 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 6 Various Tools - Part 1</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the previous post, we looked at various sections of the Windows Admin Center. In this post, we will explore the different tools present in it. These tools are what provide server management functionality. Without waiting let's jump right into it.</p>
<h2>1. Certificates</h2>
<p>The certificates section allows you to review and manage the certificates on the VM. It shows you how many have expired and how many are healthy. It also allows you to view various certificates and to import certificates into a particular store. You can also view certificate-related events.</p>
<img src="/images/1647396946623148520f69b.png" alt="Certificates">
<h2>2. Devices</h2>
<p>The Devices tool is basically your &quot;Device Manager&quot;. It lets you view and enable or disable devices. It also lets you update the driver for the selected device. You can view details regarding each device and its driver, e.g. driver date and version.</p>
<img src="/images/16473969626231486213bbc.png" alt="Devices">
<h2>3. Events</h2>
<p>The events tool is essentially your &quot;event viewer&quot;. It lets you view the following types of events:</p>
<ol>
<li>Administrative logs</li>
<li>Windows logs</li>
<li>Applications and services logs</li>
</ol>
<p>You can also export or clear the logs from here.</p>
<img src="/images/16473969826231487688ad6.png" alt="Events">
<h2>4. Files &amp; file sharing</h2>
<p>In Files and file sharing, you can manage the files on the VM. With Files, you can browse to files, rename them, create new folders, delete or move files, upload files, etc. With File shares, you can view different file shares, create new shares, edit the settings of the shares, grant permissions, enable SMB encryption, etc.</p>
<img src="/images/1647396994623148825677e.png" alt="Files & file sharing">
<h2>5. Firewall</h2>
<p>The Firewall tool lets you manage the firewall on the VM. It lets you view the firewall status for the Domain, Private, and Public profiles. It also lets you view the inbound and outbound rules, and create new rules for incoming and outgoing traffic.</p>
<img src="/images/1647397044623148b49ccb6.png" alt="Firewall">
<h2>6. Installed apps</h2>
<p>In Installed apps, you can view all the installed apps. You can also remove any app directly from here.</p>
<img src="/images/1647397055623148bf03de1.png" alt="Installed apps">
<h2>7. Local users &amp; groups</h2>
<p>The Local users &amp; groups tool lets you manage all users and groups. For users, you can view current users, manage their memberships, and create new local users. Under groups, you can view groups, add users to existing groups, create new groups, delete existing groups, etc.</p>
<img src="/images/164739748262314a6a3750f.png" alt="Local users & groups">
<h2>8. Performance Monitor</h2>
<p>In performance monitor, you need to create a workspace to view the performance counters. </p>
<img src="/images/164739749362314a75563ea.png" alt="Performance Monitor">
<p>You can view all the counters that you can view locally on the machine. You can save the workspace settings once you have set these up, for easy viewing next time.</p>
<img src="/images/164739750362314a7f3cb2e.png" alt="Performance Monitor - Workspace">
<h2>9. PowerShell</h2>
<p>The PowerShell tool lets you connect to a PowerShell session as if you are logged into the VM. You can run all the PowerShell commands as if you are inside the VM.</p>
<img src="/images/164739751462314a8aa4343.png" alt="PowerShell">
<p>In the next post, we will continue looking at more tools within Windows Admin Center.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-6-various-tools-part-1</link>
<pubDate>Thu, 03 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 5 Working with it</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the last post, we saw how to connect to the <strong>Windows Admin Center</strong>. In this post, we will explore it in detail. It helps you to manage the Windows Server directly from the Azure portal similar to the on-premises version of the Windows Admin Center. </p>
<h2>Getting Familiar with the UI</h2>
<p>Let's start by getting familiar with the User Interface (UI) of the Windows Admin Center.</p>
<img src="/images/16473882546231265ee4848.png" alt="Windows Admin Center UI">
<p>As you see above, the various parts of the Windows Admin Center are as below. Note that the line item corresponds to the number in the above screenshot.</p>
<ol>
<li>At the top you select the <strong>connection type</strong>. Here, <strong>Server Manager</strong> is selected. Other choices include Computer Management and Cluster Management. These choices are not relevant right now but will be helpful in the near future to manage computers and clusters.</li>
<li>The main page can be accessed from the <strong>Overview</strong> screen. This is the default screen that is presented when you open Windows Admin Center. We will discuss more on this in points #5 and #6 below.</li>
<li>There are <strong>various tools</strong> in this section. Use the vertical scroll bar to view all of these tools.</li>
<li>You can view different <strong>Settings on the VM</strong> from here.</li>
<li>You can use the <strong>Connect</strong> option in the Overview page to either do Remote Desktop or open a PowerShell session to the computer.</li>
<li>The main page for the <strong>Overview</strong> provides lots of detail for the VM including internal computer name, domain, operating system, version, memory, disk space, processor, uptime, etc. This also provides the graph for CPU, memory, and network usage.</li>
<li>This option shows you various <strong>PowerShell script samples</strong> for the tool that you have selected from the #3 section above. E.g. for the Installed apps section, it shows you commonly used scripts like enabling and disabling features, removing apps, etc.</li>
<li>This is the <strong>notification</strong> section for the whole Windows Admin Center.</li>
<li>This is the <strong>Settings</strong> section for the whole Windows Admin Center which includes accounts management (with which you connected to it), language/region selection, personalization (selection of light or dark modes), and various other settings.</li>
</ol>
<h2>Using Azure services in Windows Admin Center</h2>
<p>To be able to use Azure services in the Windows Admin Center, you need to register the gateway with Azure. You can do so by navigating to the settings and then going to Account, i.e. the first setting. Then click on the link for &quot;<em>Register with Azure</em>&quot;.</p>
<img src="/images/164738927262312a588b93e.png" alt="Register Gateway with Azure">
<h2>VM Settings</h2>
<p>Navigate to the VM settings by clicking on Settings under the tools section (in the bottom-left corner). Here you can manage the below settings:</p>
<ol>
<li>File Shares (SMB Server) settings</li>
<li>Environment Variables - both user and system variables</li>
<li>Power Configurations</li>
</ol>
<img src="/images/164739386162313c45e5c12.png" alt="VM Level Settings">
<p>You can not only view the settings but also add/update them. E.g. you can create a new environment variable for the user or system, or update the power configuration to Balanced, High performance, or Power saver.</p>
<h2>Windows Admin Center Settings</h2>
<p>The Windows Admin Center settings can be accessed by clicking on the gear icon in the top-right section. These settings are grouped into three categories:</p>
<ol>
<li>User settings</li>
<li>Development settings</li>
<li>Gateway settings</li>
</ol>
<img src="/images/164739458862313f1c3bfba.png" alt="Windows Admin Center Settings">
<p>User settings deal with the account, language, region, and personalization-related settings. This is where you select whether you want the default light mode or the dark mode.</p>
<p>The development settings section is where you can enable the toolset to build extensions for the Windows Admin Center itself. It allows you to collect performance data as well.</p>
<p>The gateway section allows you to control the gateway settings i.e. how you connect to the Windows Admin Center.</p>
<p>In the next post, we will look at various tools present in the Windows Admin Center.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-5-working-with-it</link>
<pubDate>Wed, 02 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 4 Connecting to it</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the previous blogs, we saw how to set up the <strong>Windows Admin Center</strong> and looked at what happens under the hood in setting it up. Now let's look at how to connect to it.</p>
<h2>Connecting</h2>
<p>To connect to the Windows Admin Center: </p>
<ol>
<li>Navigate to the VM, then under the <strong>Settings</strong> click on the &quot;<strong>Windows Admin Center</strong>&quot;. </li>
<li>In the middle of the blade, select the public or private IP address of the VM to connect to.</li>
<li>And then click on the &quot;<strong>Connect</strong>&quot; button.</li>
</ol>
<img src="/images/164738651362311f9150f66.png" alt="Connect to Windows Admin Center">
<p>Next, it will ask you to log in to Windows Admin Center using your machine's admin credentials. Note that these are the local machine admin credentials (and not the Azure portal user credentials).</p>
<img src="/images/1647386801623120b13805f.png" alt="Sign in">
<p>If you get a security prompt, then click on &quot;Advanced&quot; and then click on the link as shown below to continue.</p>
<img src="/images/1647387071623121bfddaae.png" alt="Security Prompt">
<p><strong>PRO TIP</strong>: If you are not able to see the link or the complete text after clicking on the &quot;Advanced&quot; button, press F11 to enter the full-screen mode. This should make the entire text and the link visible. Another way is to set the zoom level to a low number to ensure the whole text is visible.</p>
<p>Finally, click on the gateway to connect.</p>
<img src="/images/164738723862312266b6596.png" alt="Windows Admin Center Gateway">
<p>You are now finally connected with the Windows Admin Center. Explore the different options available to you here to manage your server.</p>
<img src="/images/16473874856231235dd79bd.png" alt="Windows Admin Center">
<p>We will explore these options in detail in the upcoming posts.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-4-connecting-to-it</link>
<pubDate>Tue, 01 Mar 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 3 Under the hood</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the previous post, we looked at how to set up the Windows Admin Center. In this post, we will take a look at what the setup performed under the hood.</p>
<p>There are various actions that are performed when installing the Windows Admin Center in the Azure portal. The primary actions are:</p>
<ul>
<li>Updating the NSG (Network Security Group) for the VM to allow inbound and outbound traffic</li>
<li>Installation of the relevant extension</li>
</ul>
<p>Let's look at these in detail.</p>
<img src="/images/1647382583623110376c860.png" alt="Installation Notifications">
<h2>Extension Added</h2>
<p>The setup installs the extension for the Admin Center as shown below. The <strong>name</strong> of the extension is &quot;AdminCenter&quot; and the <strong>type</strong> of the extension is &quot;Microsoft.AdminCenter.AdminCenter&quot;. Make sure that the status is set to &quot;Provisioning succeeded&quot;. If not, then you won't be able to connect to the Windows Admin Center.</p>
<img src="/images/16473828796231115f03049.png" alt="Extension for Admin Center">
<h2>Outbound rule added in the NSG</h2>
<p>Next, navigate to the NSG of the VM and look at the outbound rules. You will find that the setup added an outbound rule allowing TCP traffic on port 443 (i.e. HTTPS) to the Service Tag for Windows Admin Center. The priority is set to the lowest value, i.e. 100, so that this rule is evaluated first.</p>
<img src="/images/1647383471623113afbfbd7.png" alt="Outbound rule">
<h2>Inbound rule added in the NSG</h2>
<p>Next, navigate to the NSG of the VM and look at the inbound rules. If you selected the option during setup, it opened the management port (e.g. 6516) for Windows Admin Center connectivity to the VM for any protocol, any source, and any destination. The priority is set to the lowest value, i.e. 100, so that this rule is evaluated first.</p>
<p>NOTE: This is only good for testing. In a production scenario, you should either restrict this rule to a particular IP address or make sure you have connectivity to the virtual network where the VM is deployed.</p>
<img src="/images/1647383498623113ca4332d.png" alt="Inbound rule">
<h2>Communication details</h2>
<p>Traffic from the Azure portal to Windows Admin Center running on your VM uses HTTPS. Therefore all traffic is encrypted. Your Azure VM is managed using PowerShell and WMI over WinRM.</p>
<h2>Implementation details</h2>
<p>As we saw, Windows Admin Center is installed via an extension on the VM. From the official documentation, this is how the extension provides the functionality: the extension connects to an external service that manages certificates and DNS records so that you can easily connect to your VM. Each Azure VM that uses the Windows Admin Center extension gets a public DNS record that Microsoft maintains in Azure DNS. The record name is hashed with a salt to anonymize the VM's IP address when saving it in DNS - the IP addresses aren't saved in plain text in DNS. This DNS record is used to issue a certificate for Windows Admin Center on the VM, enabling encrypted communication with the VM.</p>
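<p>The salted-hash idea described above can be illustrated in a few lines. This is only a sketch of the concept, not Microsoft's actual algorithm; the salt and IP address are made up.</p>

```python
import hashlib

def anonymized_record_name(ip: str, salt: bytes) -> str:
    """Hash an IP with a salt so the DNS label doesn't reveal the address."""
    return hashlib.sha256(salt + ip.encode()).hexdigest()[:32]

label = anonymized_record_name("203.0.113.10", b"example-salt")
print(label)  # a stable hex label; the IP can't be read back from it
```

<p>The same IP and salt always produce the same label, so the record is stable, while the label itself doesn't contain the IP in plain text.</p>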
<p>Reference: <a href="https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/azure/manage-vm" target="_blank">Use Windows Admin Center in the Azure portal to manage a Windows Server VM</a></p>
<p>Now that we know how it works, in the next post, we will start working with the Windows Admin Center to manage our servers.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-3-under-the-hood</link>
<pubDate>Mon, 28 Feb 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 2 Setting it up</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>In the previous blog, we talked about what Windows Admin Center is. In this blog, we will look at setting it up in the Azure portal.</p>
<h2>Setup</h2>
<p>The setup for Windows Admin Center is very easy and most of the work is automated for you. Just navigate to the VM after ensuring that it meets the pre-requisites (as detailed in the previous blog): the VM must run Windows Server 2016, Windows Server 2019, or Windows Server 2022 and needs to have at least 3 GiB of memory. Right now it is available only in the Azure public cloud.</p>
<p>Follow these steps to set it up:</p>
<ol>
<li>Under settings, navigate to the option for &quot;Windows Admin Center&quot; as shown below.</li>
<li>Select the Inbound port for connectivity to the Windows Admin Center. Also, select the first check box for opening inbound connectivity to this port. <strong>NOTE</strong>: This is only required if you don't have connectivity to the virtual network of the VM. Select the second check box to open the outbound connectivity for Windows Admin Center to install.</li>
<li>Once selections are made, click on the Install button.</li>
</ol>
<img src="/images/164738159262310c58a433d.png" alt="Windows Admin Center Setup">
<p>That's all there is to the setup. Next, we will see what it does under the hood and how to connect to and work with the Windows Admin Center.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-2-setting-it-up</link>
<pubDate>Sat, 26 Feb 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Admin Center in the Azure portal - 1 Intro to the Easiest Server Management</title>
<description><![CDATA[<p>This blog is a part of the <strong>Windows Admin Center in the Azure portal</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/windows-admin-center-in-the-azure-portal-index/" target="_blank">Windows Admin Center in the Azure portal</a>.</p>
<p>Performing administrative tasks on Windows servers is one of the main activities in the day-to-day life of a system admin. If you are managing Azure infrastructure and also performing server management, then you need to go to the Azure portal daily and also need to log into the VMs from time to time. What if you could perform both activities from just one place, without the need to log into the VM? This is where the <strong>Windows Admin Center</strong> comes into the picture.</p>
<p><strong>Windows Admin Center</strong> is now available directly in the Azure portal for Windows server VMs. It helps you manage various aspects of the server without requiring you to log into the VM itself via RDP (remote desktop) or remote into it via PowerShell.</p>
<h2>Cost</h2>
<p>There is no cost to using the Windows Admin Center in the Azure portal.</p>
<h2>Functionality support</h2>
<p>At the time of writing of this post, the below functionality is supported:</p>
<ul>
<li>Certificates</li>
<li>Devices</li>
<li>Events</li>
<li>Files and file sharing</li>
<li>Firewall</li>
<li>Installed apps</li>
<li>Local users and groups</li>
<li>Performance Monitor</li>
<li>PowerShell</li>
<li>Processes</li>
<li>Registry</li>
<li>Remote Desktop</li>
<li>Roles and features</li>
<li>Scheduled tasks</li>
<li>Services</li>
<li>Storage</li>
<li>Updates</li>
</ul>
<h2>Pre-requisites</h2>
<p>To be able to install the Windows Admin Center in Azure, you need to ensure the following:</p>
<ul>
<li>The OS of the VM should be Windows Server 2016, Windows Server 2019, or Windows Server 2022</li>
<li>At least 3 GiB of memory</li>
<li>Be in any region of an Azure public cloud (it's not supported in Azure China, Azure Government, or other non-public clouds)</li>
</ul>
<p>There are also <strong>networking requirements</strong> to allow the traffic:</p>
<ul>
<li>Outbound internet access or an outbound port rule allowing HTTPS traffic to the Windows Admin Center service tag</li>
<li>An inbound port rule if using a public IP address to connect to the VM (not recommended)</li>
</ul>
<p>Note that the inbound rule is only required when connecting over the public IP address; if you have connectivity to the VM over its virtual network, it is not needed. These rules are auto-created when enabling the Windows Admin Center.</p>
<h2>Requirements for connecting VM</h2>
<p>The VM or laptop you will use to connect to the Windows Admin Center (via the Azure portal) also has a couple of requirements:</p>
<ul>
<li>You should be using a modern browser. Officially supported browsers are Edge and Chrome.</li>
<li>Either connectivity via the Public IP or access to the virtual network that's connected to the VM. The latter is the recommended and more secure approach.</li>
</ul>
<p>In the next post, we will set it up and will also take a look under the hood.</p>]]></description>
<link>https://HarvestingClouds.com/post/windows-admin-center-in-the-azure-portal-1-intro-to-the-easiest-server-management</link>
<pubDate>Fri, 25 Feb 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Protecting Recovery Services Vaults with Resource Guard and Multi-user authorizations</title>
<description><![CDATA[<p>If you want to control access to a Recovery Services vault and the kinds of operations allowed on it, you can now do so using a <strong>Resource Guard</strong>. It is a separate Azure resource that acts as an additional authorization gate for critical operations on the vault.</p>
<h2>Scenario and Permissions</h2>
<p>Let's assume that there are two people at your organization: one responsible for security and another responsible for performing backup-related operations. The security admin wants to limit the operations the backup administrator can perform. Let's classify these people into two roles:</p>
<ol>
<li><strong>Backup Admin</strong> - The owner of the Recovery Services Vault, who needs to perform various operations on it.</li>
<li><strong>Security Admin</strong> - The gatekeeper of the critical operations that occur on the vault, who controls the permissions the Backup admin needs to perform his job.</li>
</ol>
<p><strong>Permissions on the Resource Guard</strong> - The Security admin needs to be the owner of the Resource Guard. The Backup admin must not have any permission to the Resource Guard.</p>
<h2>Caveats</h2>
<p>Below are some of the caveats that you should be aware of:</p>
<ul>
<li>The Backup admin must not have Contributor permissions to the Resource Guard in any scenario. The Resource Guard must be owned by a different user than the backup admin.</li>
<li>You can place Resource Guard in a subscription or tenant different from the one containing the Recovery Services vault to provide better protection.</li>
<li>This feature is currently supported for Recovery Services vaults only and not available for Backup vaults.</li>
<li>Ensure that the subscriptions containing the Recovery Services vault as well as the Resource Guard (when in different subscriptions or tenants) are registered to use the Microsoft.RecoveryServices provider.</li>
</ul>
<h2>Resource Guard Creation</h2>
<p>Just search for &quot;Resource Guard&quot; in the search box on the Azure portal and navigate to the section. Click on the &quot;+ Create&quot; button to create a new Resource Guard. Fill in the name and other defaults and hit Create. That's it. </p>
<img src="/images/164755043562339fe30ffee.png" alt="Resource Guard Creation">
<p>Navigate to the resource guard resource once ready.</p>
<h2>Enabling and Disabling Operations in Resource Guard</h2>
<p>Follow the below steps to enable/disable the protected operations. The numbers correspond to the numbers in the screenshot.</p>
<ol>
<li>Navigate to your Resource Guard resource. Then navigate to the properties section.</li>
<li>Provide a description. This would appear in the vaults that are protected using this Resource Guard.</li>
<li>Enable and Disable the protected operations next as shown below.</li>
</ol>
<img src="/images/16475508836233a1a3173cd.png" alt="Properties of the Resource Guard">
<h2>Adding Backup admin as Reader on the Resource Guard</h2>
<p>To enable MUA on a vault, the backup admin of the vault must have a Reader role on the Resource Guard or subscription containing the Resource Guard. Grant the Reader role to the backup admin user from the &quot;Role assignments&quot; under the &quot;Access control (IAM)&quot; for the Resource Guard resource.</p>
<h2>Enabling Multi-User Authorization on the Recovery Services Vault</h2>
<p>To set up the Multi-User Authorization on the Recovery Services Vault, navigate to the vault and link it to the Resource Guard resource. To do this navigate to the properties of the vault and select the &quot;Update&quot; link under the Multi-User Authorization setting.</p>
<img src="/images/16475511876233a2d3d09f3.png" alt="Multi-User Authorization Setting">
<p>Next, select the &quot;Protect with Resource Guard&quot; check box and the &quot;Select Resource Guard&quot; radio button. To pick the actual Resource Guard resource, click the &quot;Select Resource Guard&quot; link.</p>
<img src="/images/16475513316233a363659d3.png" alt="Multi-User Authorization Setting - Details">
<p>In the next blade, select your directory and then the resource guard resource under that from the list. Hit select and then Save in the previous blade. </p>
<img src="/images/16475513356233a367d62f1.png" alt="Selecting the Resource Guard for the Vault">
<p>This should set it up for you. Now the operations you disabled can't be performed on the recovery services vault by the backup admin.</p>
<h2>Authorize critical (protected) operations</h2>
<p>If for any reason you need to allow the critical (i.e. protected) operations, the recommended approach is to leverage Privileged Identity Management (PIM) and have the backup admin elevate their access from Reader to Contributor on the Resource Guard resource. An alternative is to update these role assignments manually on the Resource Guard resource. PIM is recommended because it time-bounds the authorization, providing just-in-time (JIT) access that is automatically revoked after the allowed time.</p>
<p><strong>Official documentation link</strong>: <a href="https://docs.microsoft.com/en-us/azure/backup/multi-user-authorization" target="_blank">Multi-user authorization using Resource Guard</a></p>]]></description>
<link>https://HarvestingClouds.com/post/protecting-recovery-services-vaults-with-resource-guard-and-multi-user-authorizations</link>
<pubDate>Fri, 18 Feb 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Mount point changes and Non-standard SWAP disk creation on Linux VMs in Azure</title>
<description><![CDATA[<p>You may want to modify the SWAP disk or work with the temporary disk on a Linux VM and change the mount point and related configurations. For example, you may want to change the default mount point of &quot;/mnt&quot; to &quot;/mnt/resource&quot;. The issue we have seen is that the changes work, but only until you restart: once you restart the VM, the configurations are reverted and the issue persists. You should always restart the VM to verify that the configurations survive the reboot.</p>
<p>When searching online the official documentation takes you here: <a href="https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/swap-file-not-recreated-linux-vm-restart" target="_blank">Swap file is not re-created after a Linux VM restarts</a>. Make sure you have reviewed that and have tried the steps in that link. </p>
<h2>General Relevant Linux Commands</h2>
<p>The commands below are very helpful for checking various configurations when troubleshooting this type of issue:</p>
<ol>
<li><strong><em>df -Th</em></strong> - Lists the mounted (formatted and partitioned) file systems in a table, showing each file system's type, total space, and available space.</li>
<li><strong><em>free -h</em></strong> - This command allows you to check the memory and swap space. It shows you total, used, shared, cache, and available memory.</li>
<li><strong><em>swapon -s</em></strong> - This command is used to check the swap status.</li>
<li><strong><em>cat /etc/fstab</em></strong> - This shows you the content of the fstab in the &quot;/etc&quot; directory. This is the Linux system's filesystem table. This is a configuration table designed to ease the burden of mounting and unmounting file systems to a machine.</li>
<li><strong><em>fdisk -l</em></strong> - Short for &quot;format disk&quot;; a dialog-driven command in Linux used for creating and manipulating disk partition tables. The &quot;-l&quot; option lists all disk partitions.</li>
<li><strong><em>lsblk</em></strong> - This command lists information about all available or the specified block devices.</li>
</ol>
<p>Use these commands to view the current state of your VM disks and to validate the settings.</p>
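<p>For example, you can script a quick check of whether a mount point already has an fstab entry. The sketch below runs against a temporary copy with made-up device entries rather than the real /etc/fstab:</p>

```shell
#!/bin/sh
# Work on a temporary copy so the real /etc/fstab is untouched.
# The UUID and device paths below are illustrative examples only.
fstab_copy=$(mktemp)
cat > "$fstab_copy" <<'EOF'
# <device>                                <mount point> <type> <options>        <dump> <pass>
UUID=0a3407de-014b-458b-b5c1-848e92a327a3 /             ext4   defaults         0      1
/dev/disk/cloud/azure_resource-part1      /mnt          auto   defaults,nofail  0      2
EOF

mount_point="/mnt"
# Column 2 of an fstab line is the mount point; skip comment lines
if awk -v mp="$mount_point" '$1 !~ /^#/ && $2 == mp { found = 1 } END { exit !found }' "$fstab_copy"; then
    status="present"
else
    status="missing"
fi
echo "fstab entry for $mount_point: $status"
rm -f "$fstab_copy"
```

<p>Point the same awk test at the real /etc/fstab to check your own VM.</p>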
<h2>The working fix</h2>
<p>If your issue still persists then try the steps below.</p>
<p><strong>Step 1</strong> - Create the directory where you want to mount the temp (or ephemeral) disk</p>
<pre><code>mkdir /mnt/resource</code></pre>
<p><strong>Step 2</strong> - Create the configuration file for custom ephemeral disk mount in the cloud configuration daemon folder.</p>
<pre><code>touch /etc/cloud/cloud.cfg.d/00-custom-ephemeral-mount.cfg</code></pre>
<p><strong>Step 3</strong> - Update the contents of the file by using the below command.</p>
<pre><code>echo 'mounts:
 - [ "ephemeral0", "/mnt/resource" ]' &gt;&gt; /etc/cloud/cloud.cfg.d/00-custom-ephemeral-mount.cfg</code></pre>
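<p>Before rebooting, it's worth reading the file back to confirm the quoting survived. The sketch below performs the same write against a temporary directory (a stand-in for /etc/cloud/cloud.cfg.d) and checks for the ephemeral0 entry:</p>

```shell
#!/bin/sh
# Simulate the cloud-init drop-in directory with a temp dir for illustration
cfg_dir=$(mktemp -d)
cfg_file="$cfg_dir/00-custom-ephemeral-mount.cfg"

# Single outer quotes keep the inner double quotes intact
echo 'mounts:
 - [ "ephemeral0", "/mnt/resource" ]' > "$cfg_file"

# Read the file back and confirm the ephemeral0 entry is there
cat "$cfg_file"
if grep -q 'ephemeral0' "$cfg_file"; then
    check="OK"
else
    check="FAILED"
fi
echo "custom ephemeral mount config: $check"
rm -rf "$cfg_dir"
```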
<p><strong>Step 4</strong> - Edit the /etc/waagent.conf file (the Azure Linux Agent configuration file) to enable formatting of the resource disk, using the <em>sed</em> stream editor:</p>
<pre><code>sed -i 's/ResourceDisk.Format=n/ResourceDisk.Format=y/g' /etc/waagent.conf</code></pre>
<p><strong>Step 5</strong> - Per the application/database requirements allocate the swap size in /etc/waagent.conf file. Search and update the below line as per your requirements</p>
<pre><code>ResourceDisk.SwapSizeMB=x
#Replace x with your required swap size in MB, e.g. ResourceDisk.SwapSizeMB=2048</code></pre>
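<p>Steps 4 and 5 boil down to a handful of sed substitutions on waagent.conf. The sketch below applies them to a throwaway sample file so you can see the before and after without touching the real configuration. Here 2048 MB is a hypothetical swap size, and ResourceDisk.EnableSwap=y is the related setting that must also be enabled for the swap file to be created:</p>

```shell
#!/bin/sh
# Build a throwaway sample of waagent.conf with the default (disabled) values
conf=$(mktemp)
cat > "$conf" <<'EOF'
ResourceDisk.Format=n
ResourceDisk.EnableSwap=n
ResourceDisk.SwapSizeMB=0
EOF

# The same style of sed edits as in the steps above, pointed at the sample file
sed -i 's/ResourceDisk.Format=n/ResourceDisk.Format=y/g' "$conf"
sed -i 's/ResourceDisk.EnableSwap=n/ResourceDisk.EnableSwap=y/g' "$conf"
sed -i 's/ResourceDisk.SwapSizeMB=0/ResourceDisk.SwapSizeMB=2048/g' "$conf"

# Show the resulting settings
grep '^ResourceDisk' "$conf"
swap_line=$(grep 'SwapSizeMB' "$conf")
rm -f "$conf"
```

<p>Run the same substitutions against /etc/waagent.conf (with sudo) once you are happy with the values.</p>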
<p><strong>Step 6</strong> - Reboot the server and validate the settings.</p>
<p>This should fix your settings. Let us know if it did or not. Share other tips that helped in your scenario in the comments below.</p>]]></description>
<link>https://HarvestingClouds.com/post/mount-point-changes-and-non-standard-swap-disk-creation-on-linux-vms-in-azure</link>
<pubDate>Sat, 05 Feb 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Finding certified VM SKUs in Microsoft Azure for SAP deployments</title>
<description><![CDATA[<p>When dealing with the SAP deployments on Azure infrastructure you want to ensure that you select the VM SKUs that are certified by SAP. This ensures that your deployment is supported by SAP when you need it. </p>
<p>The <strong>Certified and Supported SAP HANA® Hardware</strong> is documented here: <a href="https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;range%23c:memorySize%23v:184cd506-70e0-43a7-b75b-67803ce28b06%23v:ms10" target="_blank">https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;range%23c:memorySize%23v:184cd506-70e0-43a7-b75b-67803ce28b06%23v:ms10</a></p>
<p>Note that the filters in the above link are set to memory size between <strong>2800 GiB-4 TiB</strong> and the vendor is set to Microsoft Azure. You can alter the filters as per your requirements. </p>
<p>Microsoft has also documented <strong>SAP certifications and configurations running on Microsoft Azure</strong> here: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-certifications" target="_blank">https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-certifications</a></p>
<p>The availability by region of the SKUs for HANA Large Instances is documented here: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-available-skus" target="_blank">https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-available-skus</a></p>
<p>And finally, the certified general VM SKUs can be found in SAP Note <strong>1928533</strong> - SAP Applications on Azure: Supported Products and Azure VM types - SAP ONE Support Launchpad. The direct link to this SAP Note is here: <a href="https://launchpad.support.sap.com/#/notes/1928533" target="_blank">https://launchpad.support.sap.com/#/notes/1928533</a>. Note that you will need an SAP ID to be able to view the SAP Note.</p>
<p>Hopefully, this information helps you with the planning of your Azure infrastructure for SAP deployments.</p>
<link>https://HarvestingClouds.com/post/finding-certified-vm-skus-in-microsoft-azure-for-sap-deployments</link>
<pubDate>Tue, 25 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enhanced Monitoring Agent for SAP workloads in Azure - Fix for a common issue</title>
<description><![CDATA[<p>You may have followed our previous blog to validate and troubleshoot the SAP enhanced monitoring extension in Azure and still face an issue where the extension gets installed and shows as &quot;provisioning succeeded&quot;, but doesn't integrate with the SAP system: when you run <em>dump ccm</em> for the SAP OS Collector, you get the status &quot;False&quot; for the &quot;Enhanced Monitoring Access&quot;.</p>
<p>If you want to review the previous troubleshooting and validation blog you can visit the same here: <a href="https://harvestingclouds.com/post/troubleshooting-validating-enhanced-monitoring-agent-for-sap-workloads-in-azure/" target="_blank">Troubleshooting &amp; Validating - Enhanced Monitoring Agent for SAP workloads in Azure</a></p>
<h2>Easy fix</h2>
<p>The easy fix for this is to <strong>restart the SAP Host Agent</strong>. Run the below commands (as a user with root authorization) to restart the <strong><em>saphostexec</em></strong> process:</p>
<pre><code>cd /usr/sap/hostctrl/exe
./saphostexec -restart</code></pre>
<p>Now validate the integration status again by running the below commands:</p>
<pre><code>cd /usr/sap/hostctrl/exe
./saposcol -d
dump ccm</code></pre>
<p>The status should now be set to True for the &quot;Enhanced Monitoring Access&quot;. </p>
<h2>Problem with the Easy Fix</h2>
<p>The problem you may encounter is that when you reboot the VM, the status can switch back to False. The likely reason is that the SAP OS Collector becomes ready before the Azure Enhanced Monitoring agent does. The SAP OS Collector then reports that the extension is not available even though it is installed, which is why restarting the host agent fixes the issue only temporarily.</p>
<h2>Permanent Resolution</h2>
<p>The recommendation is to increase the Azure Extension wait time to <strong>6 minutes</strong>. To do so, the below environment variable needs to be set:</p>
<pre><code>SAPOSCOL_AZURE_EXTENSION_WAITTIME=360</code></pre>
<p>Run the below command to set the environment variable:</p>
<pre><code>echo "export SAPOSCOL_AZURE_EXTENSION_WAITTIME=360" &gt;&gt; /etc/profile</code></pre>
<p>This would update the system-wide profile.</p>
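<p>To confirm that a fresh login shell will actually see the variable (as saposcol will when it starts), you can source the profile in a new shell and read the value back. The sketch below uses a temporary file as a stand-in for /etc/profile:</p>

```shell
#!/bin/sh
# Use a temp file to stand in for /etc/profile in this illustration
profile_copy=$(mktemp)
echo 'export SAPOSCOL_AZURE_EXTENSION_WAITTIME=360' >> "$profile_copy"

# Source the file in a fresh shell (as a login shell would) and read the value back;
# "$0" inside sh -c refers to the first argument after the command string
result=$(sh -c '. "$0"; echo "SAPOSCOL wait time: ${SAPOSCOL_AZURE_EXTENSION_WAITTIME}s"' "$profile_copy")
echo "$result"
rm -f "$profile_copy"
```

<p>On the real VM, open a new login shell after editing /etc/profile and echo the variable the same way.</p>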
<p>Let us know if this fixed your issue. If you found alternate fixes, please post those in the comments below.</p>]]></description>
<link>https://HarvestingClouds.com/post/enhanced-monitoring-agent-for-sap-workloads-in-azure-fix-for-a-common-issue</link>
<pubDate>Thu, 20 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Troubleshooting &amp; Validating - Enhanced Monitoring Agent for SAP workloads in Azure</title>
<description><![CDATA[<p>In the previous blog post, we saw how to install the enhanced monitoring extension for SAP workloads in Azure. We looked at various caveats linked to the installation and pre-requisites etc. You can view that post here: <a href="https://harvestingclouds.com/post/installing-enhanced-monitoring-agent-for-sap-workloads-in-azure/" target="_blank">Installing Enhanced Monitoring Agent for SAP workloads in Azure</a>. In this post, we are going to troubleshoot the extension-related common issues for SAP workloads in Azure.</p>
<h2>1. Checking the pre-requisites</h2>
<p>As discussed in the previous post, ensure that the below pre-requisites are in place before you start:</p>
<ul>
<li>This extension needs internet access to the URL &quot;management.azure.com&quot;. If this is not set up or you have locked the access, please ensure that you open this access in your environment. </li>
<li>If you already had an older version of the extension installed, make sure to uninstall the older VM extension before switching between the standard and the new version of the Azure Extension for SAP.</li>
<li>Make sure you are using SAP Host Agent 7.21 PL 47 or higher.</li>
<li>You have SUSE SLES 15 or higher. Or you have Red Hat Enterprise Linux 8.1 or higher.</li>
<li>You are using Azure Ultra Disk or Standard Managed Disks for your SAP workloads</li>
</ul>
<h2>2. Validating that the extension is installed successfully</h2>
<p>Run the below command to search for the enhanced monitoring extension-related files. </p>
<pre><code>ls -al /var/lib/waagent/ | grep AzureCAT</code></pre>
<p>There should be 3 files installed:</p>
<ul>
<li>Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux_xxxxxx.zip</li>
<li>Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux.xx.manifest.xml</li>
<li>Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-version</li>
</ul>
<p>The output should look something like the below:</p>
<img src="/images/16474607316232417b546d3.png" alt="Azure Enahnced Monitoring Files">
<h2>3. Validating that the extension process is up and running</h2>
<p>Command (requires sudo):</p>
<pre><code>ps -ax | grep AzureCAT</code></pre>
<p>The output should have an entry for the relevant process running as shown below:</p>
<img src="/images/16474609136232423169a20.png" alt="Azure CAT Process">
<h2>4. Check Enhanced Monitoring status in SAP OS Collector</h2>
<p>Run the below commands (may require sudo access):</p>
<pre><code>cd /usr/sap/hostctrl/exe
./saposcol -d
dump ccm</code></pre>
<p>Scroll down and check the output.
<strong>Required result</strong>: &quot;Enhanced Monitoring Access&quot; should show TRUE, and the output will contain various metrics. If not, keep following the post for troubleshooting tips.</p>
<img src="/images/1647461596623244dc5deb0.png" alt="Enhanced Monitoring Access - Correct Configs">
<h2>5. Validating the Extension is sending metrics</h2>
<p>Validate the extension by running the below command from inside the SAP VM:</p>
<pre><code>curl http://127.0.0.1:11812/azure4sap/metrics</code></pre>
<p>This should output various metrics. If it doesn't then the extension is not working and you need to troubleshoot further.</p>
<h2>6. Validating and Fixing the Extension</h2>
<p>This step resolves most of the issues. Test the extension by just running the below cmdlet.</p>
<pre><code>Test-AzVMAEMExtension -ResourceGroupName "Your-Resource-Group-Name" -VMName "Your-VM-Name"</code></pre>
<p>The PowerShell cmdlet will deploy/redeploy the extension and fix any configuration issues. You might have to wait up to one hour until all counters start showing up.</p>
<p><strong>Reboot</strong> the VM once and validate the extension again using the previous steps.</p>
<h2>7. Still having issues?</h2>
<p>If you are still having issues, then try this subsequent blog for a fix for a common issue: <a href="https://harvestingclouds.com/post/enhanced-monitoring-agent-for-sap-workloads-in-azure-fix-for-a-common-issue/" target="_blank">Enhanced Monitoring Agent for SAP workloads in Azure - Fix for a common issue</a></p>
<h2>References</h2>
<p>Troubleshooting SAP note: 2191498 - SAP on Linux with Azure: Enhanced Monitoring - SAP ONE Support Launchpad.</p>
<p>Link: <a href="https://launchpad.support.sap.com/#/notes/2191498" target="_blank">https://launchpad.support.sap.com/#/notes/2191498</a></p>
<link>https://HarvestingClouds.com/post/troubleshooting-validating-enhanced-monitoring-agent-for-sap-workloads-in-azure</link>
<pubDate>Sun, 16 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Installing Enhanced Monitoring Agent for Multiple SAP workloads in Azure - Code Sample</title>
<description><![CDATA[<p>In the previous blog post, we saw how to install the enhanced monitoring extension for SAP workloads in Azure. We looked at various caveats linked to the installation and pre-requisites etc. You can view that post here: <a href="https://harvestingclouds.com/post/installing-enhanced-monitoring-agent-for-sap-workloads-in-azure/" target="_blank">Installing Enhanced Monitoring Agent for SAP workloads in Azure</a>. In this post, I want to share the script sample for installing the extension for multiple VMs in a single go.</p>
<h2>Latest Sample Location on GitHub</h2>
<p>You can find the latest script sample in my GitHub repository here: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Install-SAPMonitoringExtensionInLoop" target="_blank">https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Install-SAPMonitoringExtensionInLoop</a></p>
<p>The raw version can be found directly here: <a href="https://raw.githubusercontent.com/HarvestingClouds/PowerShellSamples/master/Scripts/Install-SAPMonitoringExtensionInLoop/Install-SAPMonitoringExtensionInLoop.ps1" target="_blank">https://raw.githubusercontent.com/HarvestingClouds/PowerShellSamples/master/Scripts/Install-SAPMonitoringExtensionInLoop/Install-SAPMonitoringExtensionInLoop.ps1</a></p>
<h2>The script sample and details</h2>
<p>Just like for a single VM, the script uses the &quot;<strong><em>Set-AzVMAEMExtension</em></strong>&quot; cmdlet to install the extension with the &quot;<strong><em>InstallNewExtension</em></strong>&quot; switch. In this scenario, we pull all VMs from one or more resource groups. The resource groups are selected either by a wildcard match, or you can specify a particular resource group name. Each VM is then checked to see whether it runs Linux or Windows. Make sure that all the VMs have SAP workloads before proceeding with the installation. You can also uninstall the extension later if it is not applicable to any of the VMs.</p>
<p>The script sample is also provided below for your easy reference.</p>
<pre><code>&lt;#  
 .NOTES
    ==============================================================================================
    File:       Install-MonitoringExtensionInLoop.ps1

    Purpose:    To Install Monitoring Extension In Loop

    Version:    1.0.0.0 

    Author:     Aman Sharma
    ==============================================================================================
 .SYNOPSIS
    Installs monitoring extension in loop on Linux VMs

 .DESCRIPTION
    This script installs the monitoring extension in a loop on Linux VMs

 .EXAMPLE
    C:\PS&gt;  .\Install-MonitoringExtensionInLoop.ps1

    Description
    -----------
    This command executes the script with default parameters.

 .INPUTS
    None.

 .OUTPUTS
    None.

 .LINK
    https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/deployment-guide#2ad55a0d-9937-4943-9dd2-69bc2b5d3de0
#&gt;

#Inputs
$subscriptionName = "your-subscription-name"

#Adding Azure Account and Subscription
$env = Get-AzEnvironment -Name "AzureCloud"
Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName $subscriptionName

#Selecting all RGs whose names begin with the text. Notice the wildcard in the name
#TODO: Update this to a specific resource group or a similar query with wildcards
$allSapRGs = Get-AzResourceGroup -Name "RG-IT-*"

foreach($currentRG in $allSapRGs)
{
    #Fetch all resources in the RG
    $currentRGName = $currentRG.ResourceGroupName
    $VMs = Get-AzVM -ResourceGroupName $currentRGName

    #Iterating on the VMs
    foreach ($vm in $VMs) 
    {
        $VMName = $vm.name
        $osType = $vm.StorageProfile.OsDisk.OsType
        Write-Host "Working on VM: $VMName"

        if ($osType -eq "Linux") {
            Write-Host "VM $VMName is a Linux VM. Proceeding with the installation."
            try {
                Set-AzVMAEMExtension -ResourceGroupName $currentRGName -VMName $VMName -InstallNewExtension

                Write-Host -ForegroundColor Green "Installed the extension on the VM $VMName"
            }
            catch {
                Write-Host -ForegroundColor Red "Error while installing extension."
                $Error[0]
                Write-Host -ForegroundColor Red "Error occurred at:"
                $Error[0].InvocationInfo.PositionMessage
            }
        }
        else {
            Write-Host "VM $VMName is not a Linux VM"
        }

    }
}</code></pre>]]></description>
<link>https://HarvestingClouds.com/post/installing-enhanced-monitoring-agent-for-multiple-sap-workloads-in-azure-code-sample</link>
<pubDate>Fri, 14 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enhanced Monitoring Agent for SAP workloads in Azure - Details and Installation </title>
<description><![CDATA[<p>For any SAP deployment in Microsoft Azure, you want to deploy the enhanced Azure Extension for SAP, which is available in the Azure Extension Repository in the global Azure datacenters. The new extension was built to monitor all Azure resources of a virtual machine, and it supports additional storage options (for example, Standard Disks) and additional operating systems.</p>
<h2>Pre-requisites</h2>
<p>The pre-requisites are as follows:</p>
<ul>
<li>This extension needs internet access to the URL &quot;management.azure.com&quot;. If this is not set up or you have locked the access, please ensure that you open this access in your environment. </li>
<li>If you already had an older version of the extension installed, make sure to uninstall the older VM extension before switching between the standard and the new version of the Azure Extension for SAP.</li>
<li>Make sure you are using SAP Host Agent 7.21 PL 47 or higher.</li>
<li>You have SUSE SLES 15 or higher. Or you have Red Hat Enterprise Linux 8.1 or higher.</li>
<li>You are using Azure Ultra Disk or Standard Managed Disks for your SAP workloads</li>
</ul>
<h2>Installing on a single VM</h2>
<p>Before proceeding, you want to ensure that you have the Azure PowerShell module installed. You can do so by simply running the below cmdlet. You can learn more here: <a href="https://docs.microsoft.com/en-us/powershell/azure/install-az-ps?view=azps-7.3.2" target="_blank">Install the Azure Az PowerShell module</a>.</p>
<pre><code>Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force</code></pre>
<p>Installing on a single VM is as easy as running the &quot;<strong><em>Set-AzVMAEMExtension</em></strong>&quot; PowerShell cmdlet with the &quot;<strong><em>InstallNewExtension</em></strong>&quot; switch. Run the below script to install it on a specific VM.</p>
<pre><code>#Set the Azure context
$env = Get-AzEnvironment -Name &lt;name of the environment&gt;
Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName &lt;subscription name&gt;

#Install the Azure Enhanced Monitoring Extension
Set-AzVMAEMExtension -ResourceGroupName &lt;resource group name&gt; -VMName &lt;virtual machine name&gt; -InstallNewExtension</code></pre>
<h2>Installing on multiple VMs</h2>
<p>You can loop on the above script and can install the Monitoring extension on multiple VMs. I will also provide a code sample to do this easily and will post a blog about the same. You can view that post and the sample code here: <a href="https://harvestingclouds.com/post/installing-enhanced-monitoring-agent-for-multiple-sap-workloads-in-azure-code-sample/" target="_blank">Installing Enhanced Monitoring Agent for Multiple SAP workloads in Azure - Code Sample</a>.</p>
<h2>Testing the extension</h2>
<p>You can test the extension by running the below PowerShell cmdlet.</p>
<pre><code>Test-AzVMAEMExtension -ResourceGroupName "Your-Resource-Group-Name" -VMName "Your-VM-Name"</code></pre>
<h2>Validating and Troubleshooting the extension</h2>
<p>You can validate the extension and troubleshoot it in this subsequent blog in detail: <a href="https://harvestingclouds.com/post/troubleshooting-validating-enhanced-monitoring-agent-for-sap-workloads-in-azure/" target="_blank">Troubleshooting &amp; Validating - Enhanced Monitoring Agent for SAP workloads in Azure</a>.</p>
<h2>Supportability for this Extension</h2>
<p>Support for the Azure Extension for SAP is provided through SAP support channels. If you need assistance with the Azure VM extension for SAP solutions, please open a support case with SAP Support; cases opened with Microsoft support for this extension are generally closed and redirected to SAP. We also look at troubleshooting this extension in an upcoming blog.</p>
<h2>Details on the Metrics provided</h2>
<p>The detail of the metrics provided is documented in the SAP Note 2178632. You can find this note located here: <a href="https://launchpad.support.sap.com/#/notes/2178632" target="_blank">2178632 - Key Monitoring Metrics for SAP on Microsoft Azure - SAP ONE Support Launchpad</a>.</p>
<p><strong>Other References</strong>: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/vm-extension-for-sap-new" target="_blank">New Version of Azure VM extension for SAP solutions</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/vm-extension-for-sap" target="_blank">Implement the Azure VM extension for SAP solutions</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/enhanced-monitoring-agent-for-sap-workloads-in-azure-details-and-installation</link>
<pubDate>Wed, 12 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Grant access to Azure Storage Accounts for Azure resource instances</title>
<description><![CDATA[<p>You can now grant specific Azure resource instances access to Azure Storage Accounts. This further secures storage account access by restricting it to only your application's Azure resources.</p>
<p>Resource instances must be from the same tenant as your storage account, but they can belong to any subscription in the tenant. Internally it uses system-assigned managed identity for the selected Azure resources.</p>
<h2>Granting access</h2>
<p>Follow the below steps to set up access to Resource Instances:</p>
<ul>
<li><strong>Step 1</strong> - You can start by navigating to your Storage Account and then navigating to the <strong>Networking</strong> under &quot;<strong>Security + networking</strong>&quot;</li>
<li><strong>Step 2</strong> - Scroll down to the section named &quot;<em>Resource instances</em>&quot;</li>
<li><strong>Step 3</strong> - Select the <strong>Resource Type</strong> as per your requirements</li>
<li><strong>Step 4</strong> - Select the <strong>instance name</strong>.</li>
</ul>
<img src="/images/164747167362326c3974951.png" alt="Settings">
<p>You have various options for the <strong>instance name</strong> selection: </p>
<ol>
<li>You can select an individual instance name.</li>
<li>You can also select &quot;<strong>All in current tenant</strong>&quot; to select all resources of that type across all subscriptions in the current tenant.</li>
<li>You can also select &quot;<strong>All in current subscription</strong>&quot; to select all resources of that type in the current subscription.</li>
<li>You can also select &quot;<strong>All in current resource group</strong>&quot; to select all resources of that type in the current resource group.</li>
</ol>
<h2>Controlling allowed operations</h2>
<p>The types of operations that a resource instance can perform on storage account data are determined by the Azure role assignments of the resource instance. You can assign these using the system-assigned managed identity of the resource on the Storage Account's RBAC settings.</p>
<p><strong>Reference</strong>: You can read more on the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-azure-resource-instances-preview" target="_blank">Grant access from Azure resource instances</a></p>]]></description>
<link>https://HarvestingClouds.com/post/grant-access-to-azure-storage-accounts-for-azure-resource-instances</link>
<pubDate>Wed, 05 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enabling Storage Account access to virtual networks in other regions</title>
<description><![CDATA[<p>Within Azure Storage Accounts, service endpoints work with virtual networks in the same region or the paired region. You can extend this functionality to virtual networks in other regions by enabling the AllowGlobalTagsForStorage feature for your subscription. </p>
<h2>Enabling the feature</h2>
<p>You can enable the feature by running the below script. This essentially uses the &quot;<strong><em>Register-AzProviderFeature</em></strong>&quot; cmdlet to register the feature named &quot;<strong><em>AllowGlobalTagsForStorage</em></strong>&quot;.</p>
<pre><code>$subscriptionName = "Your-Subscription-Name"

#Adding Azure Account and Subscription
$env = Get-AzEnvironment -Name "AzureCloud"
Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName $subscriptionName

#Registering or Enabling the feature
Register-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage</code></pre>
<h2>Caveats</h2>
<p>If you registered the <strong>AllowGlobalTagsForStorage</strong> feature, and you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the storage account or its paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to configure access rules for virtual networks in other regions.</p>
<p>Reference: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-powershell" target="_blank">Configure Azure Storage firewalls and virtual networks</a></p>]]></description>
<link>https://HarvestingClouds.com/post/enabling-storage-account-access-to-virtual-networks-in-other-regions</link>
<pubDate>Mon, 03 Jan 2022 00:00:00 +0500</pubDate>
</item>
<item>
<title>Quick Tip - Finding SLA of any Azure service</title>
<description><![CDATA[<p>If you are architecting a solution around Azure services, you will need to find the service-level agreements (SLAs) for the related Azure services. These need to be factored into your solution design, as they impact the availability and uptime of the services in your solution. </p>
<h2>Finding the SLA of Azure Services</h2>
<p>To find the Service-level agreements (SLAs) for any Azure service navigate to this link: <a href="https://azure.microsoft.com/en-us/support/legal/sla/" target="_blank">Service-level agreements for Azure services</a>. </p>
<p>Here you can search for any Azure service and find its SLA. For example, to find the SLA for Azure Site Recovery (ASR), navigate to the Management category and then click on the &quot;Azure Site Recovery&quot; option. </p>
<img src="/images/16474896396232b267a380a.png" alt="Finding SLAs">
<p>This will take you to the SLA details link for the ASR.</p>
<h2>Providing an internal SLA for the services</h2>
<p>Note that in addition to Microsoft's SLA, your internal department may need to follow processes related to the service that reduce the effective SLA further. Ensure that you factor these external factors into your solution and its related SLAs. For example, for Azure Site Recovery (ASR), Microsoft's SLA on its own allows very little downtime, but your internal department may need to take additional steps pre- and post-failover via ASR. You will need to factor these into your internal department's SLA for the service.</p>
<p>A general rule of thumb is to simulate failure and check how much time it takes for your solution or services to recover from the failure. Add some reasonable (and proportional) buffer to the observed times to formulate the internal SLAs. Ensure that any external factors, human errors, or one-time issues are not affecting these numbers during such exercises.</p>
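<p>The rule of thumb above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical numbers (the 2-hour recovery time, twice-yearly failure rate, and 25% buffer are assumptions, not recommendations):</p>

```python
HOURS_PER_YEAR = 365.25 * 24

def internal_sla(observed_recovery_hours, failures_per_year, buffer_ratio=0.25):
    """Estimate an internal availability SLA (as a percentage) from
    failure-drill timings plus a proportional safety buffer."""
    downtime_hours = observed_recovery_hours * (1 + buffer_ratio) * failures_per_year
    return (1 - downtime_hours / HOURS_PER_YEAR) * 100

# e.g. a 2-hour observed recovery, planned for twice a year, with a 25% buffer
print(round(internal_sla(2, 2), 3))  # -> 99.943
```

<p>Whatever formula you settle on, the observed recovery times should come from repeated failure simulations, as described above.</p>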
<p>Hopefully, this helps you in better designing solutions in the cloud. Let us know your tips related to SLAs in the comments below.</p>]]></description>
<link>https://HarvestingClouds.com/post/quick-tip-finding-sla-of-any-azure-service</link>
<pubDate>Wed, 29 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Quick Tip - Understanding uptime and downtime linked to any Cloud SLA numbers</title>
<description><![CDATA[<p>When you are in any conversation related to cloud SLAs, you will hear numbers like 99.9% or 99.99% or more. What are these numbers, and how do they translate into uptime and downtime over a year? Let's take a closer look to understand these details.</p>
<h2>99.9% uptime SLA</h2>
<p>Let's start with a 99.9% SLA, i.e. <strong>three nines</strong>. An SLA level of 99.9% uptime/availability means that you can have up to the following periods of allowed downtime or unavailability.</p>
<ul>
<li>Daily: 1m 26s</li>
<li>Weekly: 10m 4s</li>
<li>Monthly: 43m 49s</li>
<li>Quarterly: 2h 11m 29s</li>
<li>Yearly: 8h 45m 56s</li>
</ul>
<h2>99.99% uptime SLA</h2>
<p>An SLA level of 99.99% uptime/availability, i.e. <strong>four nines</strong>, means that you can have up to the following periods of allowed downtime or unavailability.</p>
<ul>
<li>Daily: 8s</li>
<li>Weekly: 1m 0s</li>
<li>Monthly: 4m 22s</li>
<li>Quarterly: 13m 8s</li>
<li>Yearly: 52m 35s</li>
</ul>
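<p>If you prefer to compute these numbers yourself, the tables above can be reproduced with a short Python snippet (a month is approximated here as 1/12 of a 365.25-day year, so results may differ from the tables by a second or two):</p>

```python
PERIOD_SECONDS = {
    "daily": 24 * 3600,
    "weekly": 7 * 24 * 3600,
    "monthly": 365.25 * 24 * 3600 / 12,
    "quarterly": 365.25 * 24 * 3600 / 4,
    "yearly": 365.25 * 24 * 3600,
}

def allowed_downtime(sla_percent):
    """Return allowed downtime in seconds per period for a given uptime SLA."""
    unavailable = 1 - sla_percent / 100
    return {period: secs * unavailable for period, secs in PERIOD_SECONDS.items()}

# e.g. the three-nines table
for period, secs in allowed_downtime(99.9).items():
    minutes, seconds = divmod(int(secs), 60)
    print(f"{period}: {minutes}m {seconds}s")
```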
<h2>Find uptime and downtime for any SLA</h2>
<p>Use various online SLA calculators to determine the numbers easily for yourself. One such calculator is provided below (just as a reference):
<a href="https://uptime.is/" target="_blank">https://uptime.is/</a></p>
<p>This is a very simple tip, but it helps you better understand the numbers in any cloud-related SLA conversation.</p>]]></description>
<link>https://HarvestingClouds.com/post/quick-tip-understanding-uptime-and-downtime-linked-to-any-cloud-sla-numbers</link>
<pubDate>Tue, 28 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Upgrade to Azure Storage account with Data Lake Gen2 easily with new feature</title>
<description><![CDATA[<p>There is now a built-in capability in Azure Storage Accounts to upgrade to Data Lake Gen2. You no longer need to copy over the data. Note that this option is available only for the &quot;<strong><em>StorageV2 (general purpose v2)</em></strong>&quot; storage account type. It is not available for v1 storage accounts.</p>
<h2>Upgrade Experience</h2>
<p>To find the upgrade option, simply navigate to your storage account and then navigate to the Settings section. You will see an option to perform the &quot;<strong><em>Data Lake Gen2 upgrade</em></strong>&quot;. It gives you a step-by-step experience with this upgrade process.</p>
<img src="/images/16475365296233699135ba3.png" alt="Finding the upgrade option">
<h2>More Details</h2>
<p>The three steps for performing the upgrade are as below.</p>
<ul>
<li><strong>Step 1</strong>: Review account changes before upgrading - here you review and agree to the changes being performed as part of the upgrade.</li>
<li><strong>Step 2</strong>: Validate account before upgrading - checking for any unsupported features and providing a report.</li>
<li><strong>Step 3</strong>: Upgrade account - performing the actual upgrade.</li>
</ul>
<img src="/images/16475365366233699807e35.png" alt="Upgrade option details">
<p>During step 3, you are presented with all the unsupported configurations. These features are also checked during step 2, i.e. the validation step, and a report is provided before the upgrade is performed. For example, if you have versioning enabled, then that will be flagged in the validation step. You will have to navigate to Data Protection and disable it. After disabling it, you will be able to run through the wizard again. </p>
<img src="/images/164753706562336ba98f6f1.png" alt="Unsupported changes">
<p>If there are still errors, then an &quot;<strong>error.json</strong>&quot; file is created with blob-level error details. This is created in a container named &quot;<strong>hnsonerror</strong>&quot;. For example, if a blob has an auto-snapshot or an active lease, then it will be tracked in this error.json file. Rectify these errors at the blob level and then retry the upgrade.</p>
<p><strong>Reference</strong>: <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to?tabs=azure-portal" target="_blank">Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities</a></p>]]></description>
<link>https://HarvestingClouds.com/post/upgrade-to-azure-storage-account-with-data-lake-gen2-easily-with-new-feature</link>
<pubDate>Tue, 21 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Finding the optimal Availability Zone for your Microsoft Azure subscription</title>
<description><![CDATA[<p>When you start designing your solutions in Microsoft Azure for high availability, you think of <strong>Availability Zones</strong>. Within a region, these are physically separate locations that are tolerant to local failures. For your business-critical workloads, availability zones help you achieve resiliency and reliability. </p>
<h2>Need for identifying the right numbers</h2>
<p>The biggest caveat with Availability Zones is that they don't correspond to a specific location within a region. These are enumerated when you create a subscription and are different for each subscription (even within the same tenant). Even within a subscription, these are different for each region. </p>
<p>Let's look at an example. Assume you have two subscriptions in the &quot;East US&quot; region, one for Dev and another for Prod. Generally, you will see three availability zones, numbered 1, 2, and 3. The physical location referred to as zone 1 in the Dev subscription may or may not be the same location referred to as zone 1 in the Prod subscription.</p>
<p>Therefore it is important that for every subscription and for every region you find the optimal zones. </p>
<h2>How to define optimal Availability Zones</h2>
<p>The most optimal Availability Zone is the one that is best on the below two factors:</p>
<ol>
<li><strong>Latency</strong> - It should have the lowest latency</li>
<li><strong>Bandwidth</strong> - It should have the highest bandwidth</li>
</ol>
<p>Note that these are important when deploying the solution in highly available designs. Another concept related to HA design is identifying the primary and the secondary zones.</p>
<ul>
<li>Selecting the <strong>primary</strong> Availability Zone - The zone with the lowest latency and highest bandwidth should become your primary Availability Zone.</li>
<li>Selecting the <strong>secondary</strong> Availability Zone - The next best availability zone becomes the secondary zone.</li>
</ul>
<h2>Determining the Optimal Availability Zones</h2>
<p>To determine the optimal Availability Zones, you deploy 3 VMs in your subscription, each in a different Availability Zone. Then you run the latency and bandwidth tests between each pair of VMs. Also, ensure you run the tests in both directions, e.g. from VM1 to VM2 and then from VM2 to VM1. </p>
<h2>Automated Script</h2>
<p>The whole process for finding the optimal availability zones has been automated. This is now as easy as running the script from the below source:</p>
<ul>
<li><a href="https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/AvZone-Latency-Test" target="_blank">Availability Zone Latency Test Script</a></li>
</ul>
<p>The script should be run from a VM within the Virtual Network of the subscription (for which you are trying to identify the optimal zones). This script automatically creates 3 VMs in 3 different availability zones and then runs the latency and bandwidth tests. In the end, it deletes the VMs and also provides you with a detailed report.</p>
<p>The report looks something like this: </p>
<img src="/images/16477799956237209b34a38.png" alt="Test Report">
<p>The top table shows you <strong>latency</strong> in microseconds. The lower the number, the better. As you can see, zone 3 to zone 1 has the lowest latency of 56.1 microseconds. <strong>Bandwidth</strong> is shown in the bottom table and is measured in MB transferred per second. The higher the number, the better. As you can see, zone 1 to zone 3 and zone 3 to zone 2 show the highest number of 478 MB/sec. If you take both latency and bandwidth into consideration, then zone 1 and zone 3 are the best combination for this subscription. </p>
<p>Also, this test should be run at least 3 times and at different times of the day to get more in-depth and accurate results.</p>
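<p>The selection logic described above can be sketched as a small Python snippet. The latency and bandwidth matrices below are hypothetical values shaped like the report (only the 56.1-microsecond and 478 MB/sec figures come from the example above), and the combined score is one simple choice for illustration, not the script's actual method:</p>

```python
from itertools import combinations

# Hypothetical averaged measurements between zones, keyed as (from_zone, to_zone)
latency_us = {(1, 2): 120.0, (2, 1): 118.0, (1, 3): 57.0, (3, 1): 56.1,
              (2, 3): 90.0, (3, 2): 89.0}
bandwidth_mb_s = {(1, 2): 350.0, (2, 1): 340.0, (1, 3): 478.0, (3, 1): 470.0,
                  (2, 3): 460.0, (3, 2): 478.0}

def pair_score(a, b):
    """Average both directions; reward high bandwidth and low latency."""
    lat = (latency_us[(a, b)] + latency_us[(b, a)]) / 2
    bw = (bandwidth_mb_s[(a, b)] + bandwidth_mb_s[(b, a)]) / 2
    return bw / lat  # MB/sec per microsecond of latency

best_pair = max(combinations([1, 2, 3], 2), key=lambda p: pair_score(*p))
print(best_pair)  # the primary/secondary zone candidates
```

<p>On these illustrative numbers, zones 1 and 3 come out on top, matching the report's conclusion.</p>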
<p>Changing an availability zone later is a tedious task. Hopefully, this post helps you determine the right availability zones from the start and lay a good foundation for your environment.</p>]]></description>
<link>https://HarvestingClouds.com/post/finding-the-optimal-availability-zone-for-your-microsoft-azure-subscription</link>
<pubDate>Sun, 12 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Automate locking of all resources of a particular type - Code Sample</title>
<description><![CDATA[<p>Locking is a concept that is available on all Azure resources. Locks are a very important but lesser-known feature in Azure. They prevent unintended operations on a particular resource. </p>
<p><strong>NOTE</strong>: We visited this concept in the past. Here we want to re-iterate this for our readers as I have observed many issues at various customers that could have been prevented if they were leveraging locks for all their critical resources.</p>
<p>You have two types of locks in Azure:</p>
<ol>
<li><strong>ReadOnly</strong> - You won't be able to alter any configuration of the resource</li>
<li><strong>CanNotDelete</strong> - You will be able to read and modify the resource's configuration, but you will not be able to delete the resource</li>
</ol>
<p><strong>CanNotDelete</strong> is the lock level that, as a best practice, you should apply on all critical resources in the environment. Once this lock is on a resource, even a Global Administrator will not be able to delete it. The only way to delete the resource is to delete the lock first and then delete the resource. </p>
<h2>The Script Sample Details</h2>
<p>This script sample leverages this concept of locks and uses the below cmdlet to apply the locks on various critical resources in the environment. </p>
<pre><code>New-AzureRmResourceLock -LockLevel CanNotDelete -LockName DoNotDelete -ResourceName $vNet.Name -ResourceType $vNet.Type -ResourceGroupName $vNet.ResourceGroupName -LockNotes "Do Not Delete Lock" -Confirm -Force</code></pre>
<p>The above command uses the <strong><em>New-AzureRmResourceLock</em></strong> cmdlet (from the legacy AzureRM module; the equivalent in the newer Az module is <em>New-AzResourceLock</em>) to create the CanNotDelete lock on a Virtual Network. </p>
<p>This script iterates through all subscriptions that your account has access to and then applies the lock to all resources of type:</p>
<ol>
<li>Virtual Network</li>
<li>Route Tables</li>
<li>Express Routes</li>
<li>Virtual Network Gateways</li>
<li>Virtual Network Gateway Connections</li>
<li>Recovery Services Vaults (i.e. ASR Vaults)</li>
</ol>
<p>You can add more resource types that you deem as critical in your environment. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Apply-LocksOnVariousAzureResources" target="_blank">Apply-LocksOnVariousAzureResources.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/automate-locking-of-all-resources-of-a-particular-type-code-sample</link>
<pubDate>Sun, 05 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Automating the enabling of BYOS licensing for all SUSE VMs in Azure - Code Sample</title>
<description><![CDATA[<p>In the previous post we looked at the difference between two deployment models to choose from when deploying SUSE Linux Enterprise VMs (SLES):</p>
<ul>
<li>PAYG - Pay as you go</li>
<li>BYOS - Bring your own subscription</li>
</ul>
<p>We also looked at the advantages of the BYOS over the PAYG model. You can review the previous post here: <a href="https://harvestingclouds.com/post/switching-suse-vms-from-payg-to-byos-model/" target="_blank">Switching SUSE VMs from PAYG to BYOS model</a>.</p>
<p>In this post, we look at automating the process via a complete code sample.</p>
<h2>Script Location</h2>
<p>The complete script can be found on GitHub here:
<a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Enable-SUSEBYOSLicense" target="_blank">Enable-SUSEBYOSLicense</a></p>
<h2>Script Workings</h2>
<p>The script works by setting the license type property of the VM to &quot;<strong>SLES_BYOS</strong>&quot;.</p>
<pre><code>$vm.LicenseType = "SLES_BYOS"</code></pre>
<p>The script then leverages the &quot;<em>Update-AzVM</em>&quot; cmdlet to update the VM with the license property.</p>
<pre><code>Update-AzVM -VM $vm -ResourceGroupName $currentRGName</code></pre>
<p>Make sure you update the input subscription name and also the Resource Group name. The script uses a wildcard to find all resource groups whose names start with the string before the &quot;*&quot;. You can use a similar wildcard match, or you can provide a specific Resource Group name.</p>
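<p>As an aside, the prefix-wildcard matching described above behaves like this (illustrated here in Python with made-up resource group names):</p>

```python
from fnmatch import fnmatch

# Hypothetical resource group names
resource_groups = ["SAP-Prod-RG", "SAP-Dev-RG", "Web-Prod-RG"]
pattern = "SAP-*"  # matches every name starting with "SAP-"

matched = [rg for rg in resource_groups if fnmatch(rg, pattern)]
print(matched)  # -> ['SAP-Prod-RG', 'SAP-Dev-RG']
```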
<p>Let me know if this helped you in any way in the comments below or at GitHub.</p>]]></description>
<link>https://HarvestingClouds.com/post/automating-the-enabling-of-byos-licensing-for-all-suse-vms-in-azure-code-sample</link>
<pubDate>Fri, 03 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Switching SUSE VMs from PAYG to BYOS model</title>
<description><![CDATA[<p>In Azure, you have two deployment models to choose from when deploying SUSE Linux Enterprise VMs (SLES):</p>
<ul>
<li>PAYG - Pay as you go</li>
<li>BYOS - Bring your own subscription</li>
</ul>
<p>If you selected Pay as you go in the beginning and want to switch to bring your own subscription (i.e. BYOS) model and do not want to do this per VM manually, then you can leverage the code sample from this post to automate the process easily.</p>
<p><strong>Cost difference</strong>: We did the math and found that BYOS works out to be cheaper especially if you also are going to deploy SUSE Manager to manage your VMs in the environment. Let us know if you are interested and I can share the calculations.</p>
<p><strong>Advantages with BYOS</strong>: Other than the cost difference there are two main advantages that you get with BYOS:</p>
<ol>
<li>You get direct support from the SUSE support team. With the Pay-as-you-go model, you need to open a ticket with Microsoft, and only when Microsoft support deems that the issue pertains to the SUSE OS is a SUSE subject-matter expert brought in to help you.</li>
<li>You get extended support for the OS versions that are out of general support.</li>
</ol>
<p>The second advantage alone is significant enough that you would want it in your enterprise environments even if you had to pay extra.</p>
<h2>Switching to BYOS from the Portal</h2>
<p>You can switch a VM from PAYG to BYOS directly from the Azure portal. To do this, navigate to the VM within the Azure portal and then to the Configuration section as shown below. In the drop-down, select the option for &quot;<strong><em>SUSE Enterprise Linux</em></strong>&quot;. Review and select the &quot;Yes&quot; radio button to use an existing SUSE subscription. Select the checkbox to confirm that you have an eligible SUSE subscription that you can attach to this VM.</p>
<img src="/images/164778365062372ee24ee49.png" alt="SUSE BYOS Configuration">
<h2>Automating the Process</h2>
<p>In the next post, we will look at the code sample for automating the process of upgrading the licensing from PAYG to BYOS. You can check this out here:
<a href="https://harvestingclouds.com/post/automating-the-enabling-of-byos-licensing-for-all-suse-vms-in-azure-code-sample/" target="_blank">Automating the enabling of BYOS licensing for all SUSE VMs in Azure - Code Sample</a></p>
<p><strong>Official Reference</strong>: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/azure-hybrid-benefit-linux" target="_blank">How Azure Hybrid Benefit for PAYG Marketplace VMs applies for Linux virtual machines</a></p>]]></description>
<link>https://HarvestingClouds.com/post/switching-suse-vms-from-payg-to-byos-model</link>
<pubDate>Thu, 02 Dec 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Easily and automatically generate architecture diagrams in Azure with the Resource Visualizer</title>
<description><![CDATA[<p>Imagine you are in a rush and need to create a diagram of your environment urgently. You wish there was a way to just select the resources and generate a diagram that not only shows all the resources in the environment but also the relationships between them. Now there is such a way in Azure.</p>
<p><strong>Resource Visualizer</strong> is a new feature that is now integrated into the Azure portal. Access it from any Resource Group's settings to quickly generate a diagram of all the resources in the resource group. </p>
<img src="/images/164790946662391a5aea37b.png" alt="Resource Visualizer">
<p>One of the neat features is that you can click on the &quot;<strong><em>Choose resources</em></strong>&quot; button at the top to select the resources you need in the visualizer (and exclude others). You can also zoom in and out by using the zoom controls at the bottom right corner. If you are not able to adjust and need to reset, you can click on the &quot;<strong><em>Zoom to fit</em></strong>&quot; button at the top. </p>
<p>I am sure this feature will become a go-to for quickly generating diagrams with dependencies for providing walk-throughs and creating architectural diagrams in Microsoft Azure. </p>]]></description>
<link>https://HarvestingClouds.com/post/easily-and-automatically-generate-architecture-diagrams-in-azure-with-the-resource-visualizer</link>
<pubDate>Mon, 29 Nov 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Automating the enabling of BYOS licensing for all Red Hat Enterprise Linux (RHEL) VMs in Azure - Code Sample</title>
<description><![CDATA[<p>In the previous post we looked at the difference between two deployment models to choose from when deploying Red Hat Enterprise Linux VMs (RHEL):</p>
<ul>
<li>PAYG - Pay as you go</li>
<li>BYOS - Bring your own subscription</li>
</ul>
<p>We also looked at the advantages of BYOS over the PAYG model. You can review the previous post here: <a href="https://harvestingclouds.com/post/switching-red-hat-enterprise-linux-rhel-vms-from-payg-to-byos-model/" target="_blank">Switching RHEL VMs from PAYG to BYOS model</a>.</p>
<p>In this post, we look at automating the process via a complete code sample.</p>
<h2>Script Location</h2>
<p>The complete script can be found on GitHub here:
<a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Enable-RHELBYOSLicense" target="_blank">Enable-RHELBYOSLicense</a></p>
<h2>Script Workings</h2>
<p>The script works by setting the license type property of the VM to &quot;<strong>RHEL_BYOS</strong>&quot;.</p>
<pre><code>$vm.LicenseType = "RHEL_BYOS"</code></pre>
<p>The script then leverages the &quot;<em>Update-AzVM</em>&quot; cmdlet to update the VM with the license property.</p>
<pre><code>Update-AzVM -VM $vm -ResourceGroupName $currentRGName</code></pre>
<p>Make sure you update the input subscription name and also the Resource Group name. The script uses a wildcard to find all resource groups whose names start with the string before the &quot;*&quot;. You can use a similar wildcard match, or you can provide a specific Resource Group name.</p>
<p>Let me know if this helped you in any way in the comments below or at GitHub.</p>]]></description>
<link>https://HarvestingClouds.com/post/automating-the-enabling-of-byos-licensing-for-all-red-hat-enterprise-linux-rhel-vms-in-azure-code-sample</link>
<pubDate>Sat, 27 Nov 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Switching Red Hat Enterprise Linux (RHEL) VMs from PAYG to BYOS model</title>
<description><![CDATA[<p>In Azure, you have two deployment models to choose from when deploying Red Hat Enterprise Linux  VMs (RHEL):</p>
<ul>
<li>PAYG - Pay as you go</li>
<li>BYOS - Bring your own subscription</li>
</ul>
<p>If you selected Pay as you go in the beginning and want to switch to bring your own subscription (i.e. BYOS) model and do not want to do this per VM manually, then you can leverage the code sample from this post to automate the process easily.</p>
<p><strong>Cost difference</strong>: We did the math and found that BYOS works out to be cheaper. Although, the difference is not huge.</p>
<p><strong>Advantages with BYOS</strong>: Other than the cost difference there are two main advantages that you get with BYOS:</p>
<ol>
<li>You get direct support from the Red Hat support team. With the Pay-as-you-go model, you need to open a ticket with Microsoft, and only when Microsoft support deems that the issue pertains to the RHEL OS is a Red Hat subject-matter expert brought in to help you.</li>
<li>You get extended support for the OS versions that are out of general support.</li>
</ol>
<p>The second advantage alone is significant enough that you would want it in your enterprise environments even if you had to pay extra.</p>
<h2>Switching to BYOS from the Portal</h2>
<p>You can switch a VM from PAYG to BYOS directly from the Azure portal. To do this navigate to the VM within the Azure portal and navigate to the Configurations section. Then in the drop-down select the option for &quot;<strong><em>Red Hat Enterprise Linux</em></strong>&quot;. Review and select the &quot;Yes&quot; radio button to use an existing RHEL subscription. Select the checkbox to confirm that you have an eligible RHEL subscription that you can attach to this VM.</p>
<img src="/images/164778843962374197dcb82.png" alt="RHEL BYOS Configuration">
<h2>Automating the Process</h2>
<p>In the next post, we will look at the code sample for automating the process of upgrading the licensing from PAYG to BYOS. You can check this out here:
<a href="https://harvestingclouds.com/post/automating-the-enabling-of-byos-licensing-for-all-red-hat-enterprise-linux-rhel-vms-in-azure-code-sample/" target="_blank">Automating the enabling of BYOS licensing for all Red Hat Enterprise Linux (RHEL) VMs in Azure - Code Sample</a></p>
<p><strong>Official Reference</strong>: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/azure-hybrid-benefit-linux" target="_blank">How Azure Hybrid Benefit for PAYG Marketplace VMs applies for Linux virtual machines</a></p>]]></description>
<link>https://HarvestingClouds.com/post/switching-red-hat-enterprise-linux-rhel-vms-from-payg-to-byos-model</link>
<pubDate>Thu, 25 Nov 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure policies now let you customize non-compliance messages</title>
<description><![CDATA[<p>Azure policies now let you <strong>customize non-compliance messages</strong>. This looks like a small feature but helps a lot whenever a resource is not allowed by the policy. Instead of searching for why the policy denied the operation you can look at the non-compliance message and get a more in-depth idea. </p>
<p>That also means that the message should be descriptive enough in the first place. You should strategize and ensure that every policy assignment has a non-compliance message and that these messages are descriptive enough.</p>
<h2>Where to provide the non-compliance message</h2>
<p>You provide the non-compliance message when <strong>creating or editing the policy assignments</strong>. There is now a specific tab for the &quot;Non-compliance messages&quot; where you can provide a single text message. This message will give end-users an idea as to why the operation was denied for them.</p>
<img src="/images/16484917456241fce16513e.png" alt="Edit Policy Assignments">
<h2>Where do you see these in action</h2>
<p>When your operation is denied by a policy, e.g. the creation of a resource group, you can click on the &quot;<strong><em>View error details -&gt;</em></strong>&quot; link at the top and inspect the &quot;<strong><em>Raw Error</em></strong>&quot; tab. Here you will see a message property in the JSON that contains your descriptive non-compliance message.</p>
<img src="/images/16484917506241fce67b843.png" alt="Error message during resource group creation">
<p>I hope you will be able to proactively leverage this feature and enrich the end-user experience.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-policies-now-let-you-customize-non-compliance-messages</link>
<pubDate>Sun, 21 Nov 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Find all preview features that you can try now in Azure easily</title>
<description><![CDATA[<p>You can easily find all the preview features in Azure. You can also find detailed information about each of these features directly within the Azure portal. If you like a feature you can then try it out as well. Let's see where and how to do this.</p>
<h2>Finding all the preview features</h2>
<p>You can find all the preview features by navigating to your Azure subscription. Once within the subscription, scroll down in the settings and search for &quot;<strong>Preview features</strong>&quot;. This is a lesser-known area that shows you all the latest pre-release features that are available for you to try out. </p>
<img src="/images/16479205106239457e6366a.png" alt="Preview features in Azure">
<p>Every feature has the below details:</p>
<ol>
<li>For every preview feature, there is a column for State. This shows whether or not you are already registered for the feature. </li>
<li>Next, it shows the provider that the feature pertains to. </li>
<li>It also shows the release date of the feature.</li>
<li>Finally, there is a link to the documentation for the feature, for more details.</li>
</ol>
<p>You also have the option to filter the available features by text (at the top).</p>
<p>Review this section periodically to check the latest features that are available for you to try. I hope you find this tip useful.</p>]]></description>
<link>https://HarvestingClouds.com/post/find-all-preview-features-that-you-can-try-now-in-azure-easily</link>
<pubDate>Mon, 15 Nov 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Export all custom Azure Policies - Code Sample</title>
<description><![CDATA[<p><strong>Azure Policies</strong> are a key component in any Azure governance strategy. These help you allow or deny particular operations in your environment. These can also help you audit your environment for compliance with your standards (both out of the box and custom). There are two types of Azure policies:</p>
<ol>
<li><strong>Built-In</strong> - also called out-of-the-box; these are provided by Microsoft</li>
<li><strong>Custom</strong> - policies that you build for your environment as per your own policies and standards.</li>
</ol>
<p>It is a best practice to store all the custom policies in a version control system, e.g. Azure DevOps repositories. If the policies were authored directly within the Azure Policy definitions in the portal, you would want to export all of them. Exporting one policy at a time is time-consuming and error-prone. You can leverage the script sample from this blog post to automate exporting all the custom policies in your environment (across all subscriptions).</p>
<h2>Script location in GitHub</h2>
<p>The complete script to export all the custom Azure policies across various subscriptions, can be found here:</p>
<p><a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Export-AllAzureCustomPolicies" target="_blank">Export-AllAzureCustomPolicies</a></p>
<h2>Script working</h2>
<p>The script works by iterating over the various subscriptions and fetching the Azure policies in each subscription. It filters on the &quot;<strong><em>PolicyType</em></strong>&quot; property to select only the policies that are Custom. </p>
<pre><code>$policies = Get-AzPolicyDefinition | where {$_.Properties.PolicyType -eq 'Custom'}</code></pre>
<p>Once all the custom policies have been fetched, the script iterates through them using a &quot;<em>foreach</em>&quot; loop. For each policy, it builds the file name from the subscription name, an underscore, and the policy name, with a &quot;.json&quot; extension. </p>
<pre><code>$fileName = $currentSubscription.Name + "_" + $policyName + ".json"</code></pre>
<p>The script then exports the policy to a JSON file using the <strong><em>ConvertTo-Json</em></strong> cmdlet.</p>
<pre><code>$policy | ConvertTo-Json -Depth 10 | Out-File ".\Export-AzurePolicies\Output\$fileName"</code></pre>
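<p>Putting these pieces together, the overall flow of the script looks roughly like this (a condensed sketch, not the full script; the complete version on GitHub may differ in details such as error handling and output-folder creation):</p>
<pre><code># Iterate over all subscriptions the signed-in account can access
foreach ($currentSubscription in Get-AzSubscription) {
    Set-AzContext -SubscriptionId $currentSubscription.Id | Out-Null

    # Fetch only the Custom policy definitions in this subscription
    $policies = Get-AzPolicyDefinition | Where-Object { $_.Properties.PolicyType -eq 'Custom' }

    foreach ($policy in $policies) {
        # File name: subscription name + underscore + policy name + .json
        $fileName = $currentSubscription.Name + "_" + $policy.Name + ".json"
        $policy | ConvertTo-Json -Depth 10 | Out-File ".\Export-AzurePolicies\Output\$fileName"
    }
}</code></pre>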
<p>Give it a try in your environment. Once you have exported the JSON files for all the custom policies, make sure to move them into a version control system like Azure DevOps repositories.</p>]]></description>
<link>https://HarvestingClouds.com/post/export-all-custom-azure-policies-code-sample</link>
<pubDate>Thu, 11 Nov 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Index</title>
<description><![CDATA[<p>This is the <strong>Index</strong> for the series of blog posts regarding <strong>Designing Tagging Strategy for Microsoft Azure</strong>.</p>
<p><strong>Note</strong> that this Index is updated regularly as more posts are added around this topic.</p>
<ol>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-1-basics/" target="_blank">Basics and strategies in detail</a></li>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-2-enforcing-tags-at-the-resource-level/" target="_blank">Enforcing Tags at the Resource level</a></li>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-3-enforcing-tags-at-the-subscription-level-and-inheriting-on-underlying-resources/" target="_blank">Enforcing Tags at the Subscription level and Inheriting on underlying resources</a></li>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-4-enforcing-tags-at-the-resource-group-level-and-inheriting-on-underlying-resources/" target="_blank">Enforcing Tags at the Resource Group level and Inheriting on underlying resources</a></li>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-5-combining-strategies/" target="_blank">Combining strategies</a></li>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-6-auto-calculate-and-apply-tags/" target="_blank">Auto calculate and auto-apply Tags</a></li>
<li><a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-7-enforcing-tagging-standards-with-azure-policies/" target="_blank">Enforcing tagging standards with Azure Policies</a></li>
</ol>
<p><strong>Additional Links</strong>:</p>
<ul>
<li><a href="https://harvestingclouds.com/tag/policy/" target="_blank">All Azure policy related blogs</a></li>
<li><a href="https://harvestingclouds.com/tag/tag/" target="_blank">All Tagging related blogs</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-index</link>
<pubDate>Wed, 27 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 7 - Enforcing tagging standards with Azure Policies</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<h2>Enforce the tagging standards</h2>
<p>As discussed in the previous blogs, if there are any standards you want to enforce for tagging, you can do that directly within an Azure policy as well. For example, the value of the <strong>ApplicationOwner</strong> tag (i.e. the owner of the application related to the deployed resource) should be an email address. You can enforce this easily via an Azure policy.</p>
<p>You can also enforce naming conventions, or restrict a tag to a set of allowed values that its value must adhere to.</p>
<h2>Sample 1 - Set of Allowed Values</h2>
<p>You can specify allowed values for a tag. If the value provided is not from the list, then the resource creation can be denied through the Azure policy. E.g. the below policy sample enforces that the value of the Environment tag can only come from the list of allowed values. Note that the policy does not enforce the tag itself; it enforces that if the tag is specified, its value can only be one of the following: </p>
<ul>
<li>dev</li>
<li>test</li>
<li>prod</li>
</ul>
<p>Policy sample:</p>
<pre><code>{
    "mode": "Indexed",
    "policyRule": {
      "if": {
        "not": {
          "field": "tags['Environment']",
          "in": [
            "dev",
            "test",
            "prod"
          ]
        }
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
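<p>If you also want to require the Environment tag to be present (not just constrain its value when it is supplied), you can additionally check for its existence. A rough sketch of the modified condition (an illustrative variant, not taken from the linked repository):</p>
<pre><code>"if": {
  "anyOf": [
    {
      "field": "tags['Environment']",
      "exists": "false"
    },
    {
      "not": {
        "field": "tags['Environment']",
        "in": [ "dev", "test", "prod" ]
      }
    }
  ]
}</code></pre>
<p>With this condition, the deny effect fires when the tag is missing or when its value is outside the allowed list.</p>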
<h2>Sample 2 - Allowing only an Email Id</h2>
<p>You can also use wildcards and patterns within the Azure policy to enforce naming standards within the tags. E.g. if you want the ApplicationOwner tag to only be an email address, you can do that by using a &quot;like&quot; comparison against a pattern such as &quot;<em>*@domain.com</em>&quot;.</p>
<p>The below sample not only enforces the <strong>ApplicationOwner</strong> tag, but also enforces it to have the &quot;xxxx@harvestingclouds.com&quot; format, where xxxx can be anything, as denoted by the &quot;*&quot; wildcard character in the policy below.</p>
<p>Policy sample:</p>
<pre><code>{
    "mode": "Indexed",
    "policyRule": {
      "if": {
        "not": {
          "allOf": [
            {
              "field": "tags['ApplicationOwner']",
              "exists": "true"
            },
            {
              "field": "tags['ApplicationOwner']",
              "like": "*@harvestingclouds.com"
            }
          ]
        }
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Tagging" target="_blank">AzurePolicySamples - Tagging</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-7-enforcing-tagging-standards-with-azure-policies</link>
<pubDate>Wed, 20 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Moving an Azure Virtual Machine (VM) to an Availability Zone the automated way - with complete Code Sample</title>
<description><![CDATA[<p><strong>Availability Zones</strong> are a way to increase the availability of an Azure Virtual Machine (VM). They provide separate datacenters with independent cooling, power, and networking within one Azure region. If you have an application deployed redundantly across more than one VM, then you definitely want to leverage Availability Zones to increase the availability and SLA of the workload.</p>
<p>Unfortunately, you can't move an existing Azure VM from a non-zonal deployment into an availability zone, and you can't switch zones after deploying the VM into one. The script sample in this post helps you address both of these scenarios in an automated manner.</p>
<h2>Word of Caution</h2>
<p>The script works by deleting the VM and then recreating it within an availability zone. If the script errors out in the middle, you will end up with no VM. As a fallback for this situation, the script exports the VM configuration into a JSON file, but you need to know how to use that to recreate your VM. Do the below two things to ensure you don't run into issues:</p>
<ul>
<li>Try the script first for non-production and non-critical VMs. I recommend creating a dummy/temporary VM and then trying the script.</li>
<li>Ensure that there is a backup for any production/critical VMs before moving these into Availability Zones to ensure you have something to fail back to.</li>
</ul>
<h3>Script location in GitHub</h3>
<p>You can view and download the latest version of the script, directly from GitHub here: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Move-AzVMToAvailabilityZone" target="_blank">Move-AzVMToAvailabilityZone.ps1</a></p>
<h3>Script Working</h3>
<p>The script has a custom function called <strong><em>Export-VMConfig</em></strong>. It uses this function to export the VM configurations into a JSON file. This file serves as a backup (only for the configurations) in case anything goes wrong during the script execution. You can use this to recreate the VM.</p>
<p>The script then stops the VM using the <strong><em>Stop-AzVM</em></strong> command. It creates a snapshot of the OS disk and, from that snapshot, creates an Azure disk with zone information. The script performs the same steps for all of the data disks. It then deletes the Azure VM using the <strong><em>Remove-AzVM</em></strong> cmdlet and starts building the configuration for the new VM using the <strong><em>New-AzVMConfig</em></strong> cmdlet. </p>
<p>The script reapplies the tags and diagnostics profile information. It then sets the OS disk as per the OS of the original VM, adds the data disks, and adds the NIC(s), keeping the same NIC as primary. If there is a Public IP from the Basic SKU, the script removes it because that SKU doesn't support zones. </p>
<p>Finally, the script recreates the VM using the <strong><em>New-AzVM</em></strong> cmdlet.</p>
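<p>The snapshot-to-zonal-disk step described above can be sketched as follows. This is illustrative only: <em>$vm</em>, <em>$rgName</em>, the resource names, and the target zone &quot;1&quot; are placeholders, and the full script on GitHub additionally handles data disks, NICs, and error cases:</p>
<pre><code># Snapshot the existing OS disk of the (stopped) VM
$snapshotConfig = New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
    -Location $vm.Location -CreateOption Copy
$snapshot = New-AzSnapshot -ResourceGroupName $rgName `
    -SnapshotName "$($vm.Name)-os-snap" -Snapshot $snapshotConfig

# Create a new managed disk from the snapshot, pinned to an availability zone
$diskConfig = New-AzDiskConfig -Location $vm.Location -SourceResourceId $snapshot.Id `
    -CreateOption Copy -Zone "1"
$newOsDisk = New-AzDisk -ResourceGroupName $rgName `
    -DiskName "$($vm.Name)-os-zonal" -Disk $diskConfig</code></pre>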
<h3>Manual action after the script execution</h3>
<p>After the new VM has been validated, I recommend waiting anywhere from 24 hours to a week before cleaning up the residual non-zonal resources. Once you are ready, delete the older OS and data disks and the related snapshots. These should no longer be attached to any VMs.</p>
<p>I hope that the script helps you move your existing infrastructure to take advantage of the redundancy options provided by Availability Zones in Azure.</p>]]></description>
<link>https://HarvestingClouds.com/post/moving-an-azure-virtual-machine-vm-to-an-availability-zone-the-automated-way-with-complete-code-sample</link>
<pubDate>Sat, 16 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 6 - Auto Calculate and Apply Tags</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<h2>Auto Calculate and Apply tags</h2>
<p>As we discussed in the last post, whenever you can you should try to auto calculate and apply the tags in an automated fashion. Some examples include:</p>
<ul>
<li>Deriving a tag from the name of the resource group, if you have a naming convention defined for resource groups (by using substring on the resource group name). </li>
<li>Calculating a date/time stamp. You can leverage the utcNow() and substring() functions to produce a date/time as per your formatting standards.</li>
</ul>
<p>Let's look at the second use case as that is more generic and applicable to all scenarios. You can use this to apply the below tags:</p>
<ul>
<li><strong>BuildDate</strong> - date stamp when the resource is first deployed. Azure does not surface this metadata directly, and without this tag it is hard to find down the line. So it is a good practice to tag each resource at creation time.</li>
<li><strong>UpdateDate</strong> - date and time stamp whenever the resource is updated.</li>
</ul>
<h2>Policy sample</h2>
<p>Let's look at the <strong>BuildDate</strong> tag and how to apply this automatically for each and every resource. </p>
<p><strong>Note</strong>: </p>
<ul>
<li>The policy applies the tag in the MM/DD/YYYY format. This ensures that the BuildDate tag everywhere adheres to a standard.</li>
<li>The policy applies only when the tag does not exist on the resource or does not match the date format. Once the tag is present in the correct format, the policy will not apply to that resource (based on the if conditions specified within the policy below)</li>
</ul>
<p>Let's jump into the policy sample now. A complete sample can be found at my GitHub repository (link provided at the bottom of this post).</p>
<pre><code>"mode": "All",
"policyRule": {
  "if": {
    "anyOf": [
      {
        "field": "tags['BuildDate']",
        "exists": "false"
      },
      {
        "field": "tags['BuildDate']",
        "notMatch": "##/##/####"
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['BuildDate']",
          "value": "[concat(substring(utcNow(),5,2),'/', substring(utcNow(),8,2),'/',substring(utcNow(),0,4))]"
        }
      ]
    }
  }
}</code></pre>
<p>You can also find the policy for the UpdateDate tag in GitHub; it applies a time stamp along with the date stamp.</p>
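<p>As a rough sketch of how such a time stamp could be composed (this is an illustration, not necessarily the exact expression used in the GitHub sample; the substring positions assume the same ISO-8601 output of utcNow() as the BuildDate sample above, where hours start at index 11 and minutes at index 14):</p>
<pre><code>"value": "[concat(substring(utcNow(),5,2),'/',substring(utcNow(),8,2),'/',substring(utcNow(),0,4),' ',substring(utcNow(),11,2),':',substring(utcNow(),14,2),' UTC')]"</code></pre>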
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Tagging" target="_blank">AzurePolicySamples - Tagging</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-6-auto-calculate-and-apply-tags</link>
<pubDate>Fri, 15 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 5 - Combining strategies</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<h2>Combining strategies</h2>
<p>In the previous few posts, we looked at individual tagging strategies. In this post, we will combine those strategies to build a custom tagging strategy for your environment. The combination looks at defining the tags at a higher level and then inheriting these tags at the resources under that level. This means that this combination tagging strategy will employ the following:</p>
<ol>
<li>Applying highest level tags at the <strong>Subscription level</strong> and then inheriting those at the lower levels</li>
<li>Applying the group level tags at the <strong>Resource Group level</strong> and then inherit those at the individual resources</li>
<li>Applying specific tags at the individual resource level</li>
<li>Enforcing other tagging related standards</li>
<li>Auto calculate and apply tags (as many as you can)</li>
</ol>
<p>Let's look at these in detail.</p>
<h3>1. Planning tags at the subscription level</h3>
<p>You want to apply tags at the subscription level that are generic to the whole subscription. One example could be a tag for the <strong>Environment</strong>, i.e. Dev, Test, Prod, etc. If you deploy separate subscriptions for different environments, this is an easy tag to apply at this level.</p>
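<p>As a quick illustration, you can apply such a tag manually at the subscription scope with Azure PowerShell (a sketch assuming the Az.Resources module; replace the subscription ID placeholder with your own):</p>
<pre><code># Merge the Environment tag into the existing tags at the subscription scope
Update-AzTag -ResourceId "/subscriptions/&lt;subscription-id&gt;" -Tag @{ Environment = "Dev" } -Operation Merge</code></pre>
<p>Using the Merge operation keeps any tags already present at that scope intact.</p>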
<h3>2. Planning tags at the resource group level</h3>
<p>Generally, you will want to group your resources by department and by a specific application within that department. Whatever your grouping is based on, you can plan those tags for the resource group level and then inherit them automatically at the resource level. E.g. DepartmentName, ApplicationName, ApplicationOwner, etc. can be applied at the resource group level and then inherited below.</p>
<h3>3. Planning tags at the resources level</h3>
<p>There will be some information that needs to be tagged at the individual resource level. A couple of examples of this are:</p>
<ul>
<li><strong>BusinessCriticality</strong> - criticality of the resource based on the business requirements</li>
<li><strong>UpdateDate</strong> - Timestamp when a resource is updated</li>
</ul>
<h3>4. Enforce the tagging standards</h3>
<p>If there are any standards you want to enforce for tagging, you can do that directly within an Azure policy as well. For example, the value of the <strong>ApplicationOwner</strong> tag (i.e. the owner of the application related to the deployed resource) should be an email address. You can enforce this easily via an Azure policy.</p>
<p>You can also enforce naming conventions, or restrict a tag to a set of allowed values that its value must adhere to.</p>
<p>We will look at these in upcoming blogs.</p>
<h3>5. Auto calculate and apply tags</h3>
<p>Whenever you can, you should try to auto calculate and apply the tags in an automated fashion. Some examples include:</p>
<ul>
<li>Deriving a tag from the name of the resource group, if you have a naming convention defined for resource groups (by using substring on the resource group name). </li>
<li>Calculating a date/time stamp. You can leverage the utcNow() and substring() functions to produce a date/time as per your formatting standards.</li>
</ul>
<p>We will look at these in detail in upcoming posts.</p>
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Tagging" target="_blank">AzurePolicySamples - Tagging</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-5-combining-strategies</link>
<pubDate>Thu, 14 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 4 - Enforcing Tags at the Resource Group level and Inheriting on underlying resources</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<p>In the last post, we looked at enforcing the tags at the subscription level and inheriting those at the resource level. In this post, we are looking at policies and various nuances in implementing the third strategy i.e. enforcing tags at the Resource Group level and inheriting at each resource.</p>
<h2>Require Tags on a Resource Group</h2>
<p>You can require tags on a Resource Group by using the below Azure policy. It is an extension of the in-built policy, extended to check multiple tags in a single policy.</p>
<p>The below policy enforces the <strong>ApplicationOwner</strong> and <strong>DepartmentName</strong> tags on Resource Groups. It denies the creation of a Resource Group if either of the tags is not present.</p>
<p><strong>Note</strong>: You can parameterize the tag name and its value as well very easily. You can check in-built policies for reference. Or let me know if this is something you want to see and I can create a parameterized sample.</p>
<pre><code>"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Resources/subscriptions/resourceGroups"
      },
      {
        "anyOf": [
          {
            "field": "tags['ApplicationOwner']",
            "exists": "false"
          },
          {
            "field": "tags['DepartmentName']",
            "exists": "false"
          }
        ]
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}</code></pre>
<p>You will get an error similar to below when trying to create a resource group after you have defined and assigned this policy in your subscription.</p>
<img src="/images/1648366287624012cfc9f08.png" alt="Resource Group creation error due to tagging policy">
<p>When you get the above error while trying to create a Resource Group without tags (with the policy already assigned to enforce tags on RGs), click on line #1 in the screenshot. In the pop-up blade that opens, click on &quot;Raw Error&quot;, shown above as #2. Then scroll down to see the details of the policy that stopped the resource group creation.</p>
<h2>Inheriting Tags from Resource Groups</h2>
<p>You can easily inherit the tags from a Resource Group down to its underlying resources by using an Azure policy specifically for this purpose. You have <strong>three approaches for inheriting</strong> the tags from the resource group level:</p>
<ol>
<li>Inherit the tags and their value only if this tag is missing from the resource</li>
<li>Inherit and replace the tag if it doesn't match the value of the same tag at the resource group level</li>
<li>Inheriting the tags regardless of what is specified at the resource level</li>
</ol>
<p>Which approach works for you depends on your requirements. The first two approaches are identical to what we discussed in the last post for the subscription level. You can view those at this link and just replace subscription with resource group level functions: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-3-enforcing-tags-at-the-subscription-level-and-inheriting-on-underlying-resources/" target="_blank">Enforcing Tags at the Subscription level and Inheriting on underlying resources</a>.</p>
<p>We have seen approach 3 being the most common and we will discuss that next.</p>
<h3>Inheriting the tags regardless of what is specified at the resource level</h3>
<p>In the below policy, all of the following conditions must be met before the action is taken:</p>
<ul>
<li>The ApplicationOwner and DepartmentName tags exist at the resource group level </li>
<li>Neither of these tags has an empty value at the resource group level</li>
</ul>
<p>If both of these conditions are true then the &quot;modify&quot; effect is applied. Within this effect, the tags from the resource group level are applied at the resource level.</p>
<p><strong>Note</strong>: You can parameterize any values within this sample policy. Also, you can extend this policy to multiple tags (without creating additional policies).</p>
<pre><code>"policyRule": {
  "if": {
    "allOf": [
      {
        "value": "[resourceGroup().tags['ApplicationOwner']]",
        "exists": "true"
      },
      {
        "value": "[resourceGroup().tags['ApplicationOwner']]",
        "notEquals": ""
      },
      {
        "value": "[resourceGroup().tags['DepartmentName']]",
        "exists": "true"
      },
      {
        "value": "[resourceGroup().tags['DepartmentName']]",
        "notEquals": ""
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['ApplicationOwner']",
          "value": "[resourceGroup().tags['ApplicationOwner']]"
        },
        {
          "operation": "addOrReplace",
          "field": "tags['DepartmentName']",
          "value": "[resourceGroup().tags['DepartmentName']]"
        }
      ]
    }
  }
}</code></pre>
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Tagging" target="_blank">AzurePolicySamples - Tagging</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-4-enforcing-tags-at-the-resource-group-level-and-inheriting-on-underlying-resources</link>
<pubDate>Sun, 10 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 3 - Enforcing Tags at the Subscription level and Inheriting on underlying resources</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<p>In the last post, we looked at enforcing the tags at the resource level. In this post, we are looking at policies and various nuances in implementing the second strategy i.e. enforcing tags at the Subscription level and inheriting at each resource.</p>
<h2>Add or Replace Tags on a Subscription</h2>
<p>As the subscription will already exist, it doesn't make sense to &quot;<strong>require tags</strong>&quot; on the subscription. Instead, you will &quot;<strong>add or replace a tag</strong>&quot; on the subscription level.</p>
<p>You can add or replace a tag on a subscription by using the below Azure policy. This is an extension of the in-built policies. This combines multiple policies into a single one to simplify the management of policies.</p>
<p>The below policy enforces the <strong>Environment</strong> tag on the Subscription. It adds the tag if it is not present. If it is present and the value is not &quot;Dev&quot;, then it updates the tag's value to Dev. </p>
<p><strong>Note</strong>: You can parameterize the tag name and its value as well very easily. You can check in-built policies for reference. Or let me know if this is something you want to see and I can create a parameterized sample.</p>
<pre><code>"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Resources/subscriptions"
      },
      {
        "anyOf": [
          {
            "field": "tags['Environment']",
            "exists": "false"
          },
          {
            "field": "tags['Environment']",
            "notEquals": "Dev"
          }
        ]
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['Environment']",
          "value": "Dev"
        }
      ]
    }
  }
}</code></pre>
<h2>Inheriting Tags from Subscription</h2>
<p>You can easily inherit the tags from a subscription down to its underlying resources by using an Azure policy specifically for this purpose. You have <strong>two approaches for inheriting</strong> the tags from the subscription level:</p>
<ol>
<li>Inherit the tags and their value only if this tag is missing from the resource</li>
<li>Inherit and replace the tag if it doesn't match the value of the same tag at the subscription level</li>
</ol>
<p>Which approach works for you depends on your requirements. For both these scenarios, there are in-built Azure policies provided by Microsoft. Let's look at both of these. </p>
<h3>1. Inheriting the tag - only if it is missing</h3>
<p>In the below policy, all of the following conditions must be met before the action is taken:</p>
<ul>
<li>The Environment tag does not exist on the resource</li>
<li>The Environment tag at the subscription level is not equal to an empty value</li>
</ul>
<p>If both of these conditions are true i.e. the Environment tag is missing and has a valid value at the subscription level, then the &quot;modify&quot; effect is applied. Within this effect, the tag from the subscription level is applied at the resource level.</p>
<p><strong>Note</strong>: You can parameterize any values within this sample policy. Also, you can extend this policy to multiple tags (without creating additional policies).</p>
<pre><code>"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "tags['Environment']",
        "exists": "false"
      },
      {
        "value": "[subscription().tags['Environment']]",
        "notEquals": ""
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['Environment']",
          "value": "[subscription().tags['Environment']]"
        }
      ]
    }
  }
}</code></pre>
<h3>2. Inherit and replace the tag if it doesn't match the value of the same tag at the subscription level</h3>
<p>In the below policy, all of the following conditions must be met before the action is taken:</p>
<ul>
<li>The Environment tag at the resource level doesn't have the same value as the one at the subscription level</li>
<li>The Environment tag at the subscription level is not equal to an empty value</li>
</ul>
<p>If both of these conditions are true i.e. the Environment tag doesn't have the same value as the one at the subscription level and it has a valid value at the subscription level, then the &quot;modify&quot; effect is applied. Within this effect, the tag from the subscription level is applied at the resource level.</p>
<p><strong>Note</strong>: You can parameterize any values within this sample policy. Also, you can extend this policy to multiple tags (without creating additional policies).</p>
<pre><code>"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "tags['Environment']",
        "notEquals": "[subscription().tags['Environment']]"
      },
      {
        "value": "[subscription().tags['Environment']]",
        "notEquals": ""
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['Environment']",
          "value": "[subscription().tags['Environment']]"
        }
      ]
    }
  }
}</code></pre>
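<p>As mentioned in the note above, the tag name can be parameterized so that one policy definition is reusable for multiple tags. The below sketch shows what that parameterized variant could look like (it follows the pattern used by the built-in &quot;Inherit a tag from the subscription&quot; policy; the parameter name <code>tagName</code> is just an example):</p>
<pre><code>"parameters": {
  "tagName": {
    "type": "String",
    "metadata": {
      "displayName": "Tag Name"
    }
  }
},
"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "[concat('tags[', parameters('tagName'), ']')]",
        "notEquals": "[subscription().tags[parameters('tagName')]]"
      },
      {
        "value": "[subscription().tags[parameters('tagName')]]",
        "notEquals": ""
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "[concat('tags[', parameters('tagName'), ']')]",
          "value": "[subscription().tags[parameters('tagName')]]"
        }
      ]
    }
  }
}</code></pre>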
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Tagging" target="_blank">AzurePolicySamples - Tagging/</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-3-enforcing-tags-at-the-subscription-level-and-inheriting-on-underlying-resources</link>
<pubDate>Thu, 07 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 2 - Enforcing Tags at the Resource level</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<p>In the last post, we looked at the importance of tagging in Azure and various tagging strategies. You can view that post here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-1-basics/" target="_blank">Designing Tagging Strategy for Microsoft Azure - Part 1 - Basics</a>. In this post, we are looking at policies and various nuances in implementing the first strategy i.e. enforcing tags at the resource level. </p>
<h2>Requiring Tags on a resource</h2>
<p>You can require tags on a resource by using the below Azure policy. It extends the built-in policy that requires a single tag to two tags, and shows how you can add even more tags using just one policy.</p>
<pre><code>"policyRule": {
  "if": {
        "anyOf": [
          {
            "field": "tags['ApplicationOwner']",
            "exists": "false"
          },
          {
            "field": "tags['DepartmentName']",
            "exists": "false"
          }
        ]
  },
  "then": {
    "effect": "deny"
  }
}</code></pre>
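<p>To try the above rule yourself, you could register it as a policy definition with the Azure CLI. The below sketch sanity-checks the JSON locally first; the definition name and display name are examples only, and the <code>az</code> command itself requires the Azure CLI and a logged-in session:</p>
<pre><code># The "if"/"then" rule from above, as an inline JSON string
rule="{ \"if\": { \"anyOf\": [ { \"field\": \"tags['ApplicationOwner']\", \"exists\": \"false\" }, { \"field\": \"tags['DepartmentName']\", \"exists\": \"false\" } ] }, \"then\": { \"effect\": \"deny\" } }"

# Sanity-check: pretty-prints the rule if it is valid JSON
printf '%s' "$rule" | python3 -m json.tool

# With the Azure CLI installed and logged in:
# az policy definition create --name require-tags --mode Indexed \
#   --rules "$rule" \
#   --display-name "Require ApplicationOwner and DepartmentName tags"</code></pre>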
<p>The above policy requires two tags:</p>
<ol>
<li>ApplicationOwner - Person responsible for the resource</li>
<li>DepartmentName - Name of the department to which the resource belongs</li>
</ol>
<h2>Requiring a Tag and also enforcing a format</h2>
<p>Let's assume that you require the ApplicationOwner tag, but you also want to ensure that the user deploying the resource provides an email address as its value. E.g. if the domain is HarvestingClouds.com, then the email address will look something like xxxx@harvestingclouds.com, where &quot;xxxx&quot; could be anything. In the policy you match this by using the &quot;*&quot; wildcard character, so the match should be against &quot;<strong>*@harvestingclouds.com</strong>&quot;. The policy will look something like below:</p>
<pre><code>"policyRule": {
  "if": {
    "not": {
      "allOf": [
        {
          "field": "tags['ApplicationOwner']",
          "exists": "true"
        },
        {
          "field": "tags['ApplicationOwner']",
          "like": "*@harvestingclouds.com"
        }
      ]
    }
  },
  "then": {
    "effect": "deny"
  }
}</code></pre>
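<p>The &quot;like&quot; condition supports the &quot;*&quot; wildcard, which matches any run of characters (including none). As a quick local analogy (not Azure Policy itself, just a shell glob with the same pattern), you can see which tag values would pass:</p>
<pre><code># Local analogy of the "like": "*@harvestingclouds.com" condition
matches_owner() {
  case "$1" in
    *@harvestingclouds.com) echo "allowed" ;;
    *) echo "denied" ;;
  esac
}

matches_owner "jane@harvestingclouds.com"   # allowed
matches_owner "jane@example.com"            # denied</code></pre>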
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Tagging" target="_blank">AzurePolicySamples - Tagging/</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-2-enforcing-tags-at-the-resource-level</link>
<pubDate>Mon, 04 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Designing Tagging Strategy for Microsoft Azure - Part 1 - Basics</title>
<description><![CDATA[<p>This blog is a part of the <strong>Designing Tagging Strategy for Microsoft Azure</strong> series. You can find the <strong>Index</strong> of this series here: <a href="https://harvestingclouds.com/post/designing-tagging-strategy-for-microsoft-azure-index/" target="_blank">Designing Tagging Strategy for Microsoft Azure</a>.</p>
<p>One of the most useful yet most ignored features within Microsoft Azure is <strong>Tagging</strong>. Tags provide a way to add additional data to your resources. Having a strong tagging strategy from the beginning can provide more insights into your environment. They are especially helpful in understanding how the cost is distributed in your environment. In this post, we are going to look at a few strategies to implement tagging in your environment. </p>
<h2>Where and What Tags to consider</h2>
<p>Think of <strong>Tags</strong> as additional <strong>metadata</strong> that is added to the resources within Azure. Everything is a resource and therefore can have tags. At a bare minimum, any resource that generates billing should have a tag applied. </p>
<p>The <strong>first thing</strong> you need to define when it comes to tagging is what tags you should consider enforcing/applying in your environment. Here are some that I recommend:</p>
<ol>
<li><strong>ApplicationOwner</strong> - Owner of the application to which the resource belongs.</li>
<li><strong>ContactDL</strong> - Distribution List (DL) that can be contacted by the cloud operations team in case there is any action that needs to be taken related to the resource.</li>
<li><strong>DepartmentName</strong> - Department to which the resource belongs.</li>
<li><strong>ApplicationName</strong> - Application name for which the resource has been deployed</li>
<li><strong>CostCenter</strong> - Cost center that should be billed (within the organization).</li>
<li><strong>ResourceBuildDate</strong> - Date when the resource was first deployed</li>
<li><strong>ResourceUpdateDate</strong> - Date when the resource was updated in any way.</li>
<li><strong>BusinessCriticality</strong> - Business criticality of the application. The highest criticality applications should be deployed in a redundant fashion.</li>
</ol>
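<p>Once you have settled on a tag set, the tags can be supplied at creation time. A minimal sketch using the Azure CLI (the resource group name, owner address, and values below are examples only; the <code>az</code> call requires a logged-in session):</p>
<pre><code># Space-separated key=value pairs, as expected by the --tags parameter
tags="ApplicationOwner=jane@harvestingclouds.com DepartmentName=Finance CostCenter=CC1234 BusinessCriticality=High"

# With the Azure CLI installed and logged in:
# az group create --name rg-finance-app1 --location westeurope --tags $tags
echo "$tags"</code></pre>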
<p><strong>Managing shared resources</strong> - There will be many resources (e.g. virtual networks) that will be used by various departments. These should be tied to the cloud operations team. The expense for these resources should either be shared equally by each department or a weighted distribution based on the overall cloud resources consumption. </p>
<h2>How to enforce Tags</h2>
<p>Once you have defined the tagging strategy, you will need to ensure that the tags are applied for each and every resource that should have tags on them. The easiest way to enforce this is by using <strong>Azure Policy</strong>. We will be looking at this in the next blog.</p>
<h2>Strategy 1 - Applying Tags for each resource</h2>
<p>The easiest and most basic strategy is to apply the tags for each and every resource. You simply define Azure policies to enforce the tagging for all resources. Also, you define the policy for each and every tag you want to enforce.</p>
<p>The downsides to this strategy are:</p>
<ol>
<li>Some resources don't have the ability to apply tags when creating them from the Azure portal. So you will need to apply tags to these resources programmatically via ARM templates, Bicep configurations, Azure CLI, PowerShell, etc.</li>
<li>You will need to manage the tags at the resource level. If even basic tags like BusinessCriticality are missing, the resource creation will fail.</li>
</ol>
<p>To enhance this strategy you can try to calculate some tags and auto-apply those when each resource is created or updated. E.g. Build Date can be auto-calculated, validated and applied to each resource during its creation time. </p>
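<p>The auto-calculated tags mentioned above can be scripted. A small sketch (the resource ID is a hypothetical placeholder; the <code>az tag update</code> call with <code>--operation Merge</code> adds the tag without overwriting existing tags and requires a logged-in Azure CLI session):</p>
<pre><code># Calculate the build date at deployment time
build_date=$(date +%Y-%m-%d)

# With the Azure CLI logged in and $resource_id set to the target resource:
# az tag update --resource-id "$resource_id" --operation Merge \
#   --tags ResourceBuildDate="$build_date"
echo "ResourceBuildDate=$build_date"</code></pre>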
<h2>Strategy 2 -  Applying Tags at the Subscription and inheriting at each resource</h2>
<p>The next strategy is to specify the key tags at the subscription level and then apply them automatically at the lower levels, using Azure policies to auto-apply them. The biggest advantage of this strategy is that you don't need to apply these tags to each and every resource. You just apply the tags at the subscription level, and they are auto-applied to each resource within the subscription when the resource is deployed. An example would be a tag for the environment. If you have a subscription for each environment (i.e. one each for dev, test/QA, staging, and production), then you can have an Environment tag defined at the subscription level and auto-applied to each resource deployed within that subscription.</p>
<p>The downside to this strategy is that you can use this only for the most generic resources. E.g. within your subscription, you will have different resource groups for different departments, so you can't have the tag for DepartmentName applied at the subscription level (unless you have a subscription corresponding to one department only).</p>
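<p>Applying the subscription-level tag itself is a one-off step. A sketch with the Azure CLI (the subscription ID is a placeholder; the commented <code>az tag update</code> call requires a logged-in session, and <code>--operation Merge</code> preserves any tags already set at that scope):</p>
<pre><code># Placeholder subscription ID
sub_id="00000000-0000-0000-0000-000000000000"
scope="/subscriptions/$sub_id"

# With the Azure CLI logged in:
# az tag update --resource-id "$scope" --operation Merge --tags Environment=Production
echo "$scope"</code></pre>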
<h2>Strategy 3 -  Applying Tags at the Resource Groups and inheriting at each resource</h2>
<p>Similar to the above strategy, in this third strategy, you enforce the tags at the resource group level and then automatically apply these to each resource that is deployed within the resource group. </p>
<p>Similar to the above strategy, you get the advantage of auto-applying tags without having to think about them when planning individual resource deployments. As a best practice, you should still plan the tags for each resource, but these will be auto-inherited if you miss any. E.g. if you are creating a resource group for a specific department and a specific application within that department, you can easily define the DepartmentName and ApplicationName tags at the resource group level and have those auto-inherited by each resource deployed within that resource group.</p>
<p>The only disadvantage is that any resource-specific tags still need to be applied at the resource level. E.g. Business Criticality may vary for each resource even within a resource group.</p>
<h2>Strategy 4 - A mix of the above strategies</h2>
<p>The best strategy is a mix of all the above strategies as below:</p>
<ol>
<li>The most generic tags should only be deployed at the <strong>subscription level</strong> and then auto inherited for all resources under that subscription</li>
<li>The <strong>Resource Group</strong> should enforce tags that are specific to that group and then auto inherit for all resources under that resource group</li>
<li>The <strong>resource-specific tags</strong> should be either auto-calculated and applied or enforced for each resource. You should also auto-calculate, validate, and auto-apply as many tags as you can.</li>
</ol>
<p>E.g. you can have an environment tag at the subscription level, a department and application tag at the resource group level, a build date and update date at the resource level auto-calculated and applied, and other tags enforced at the resource level. </p>
<p>In the next posts, we will look at ways to apply these strategies with policy samples.</p>]]></description>
<link>https://HarvestingClouds.com/post/designing-tagging-strategy-for-microsoft-azure-part-1-basics</link>
<pubDate>Sat, 02 Oct 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup Series - Index</title>
<description><![CDATA[<p>This is the <strong>Index</strong> for the series of blog posts regarding the <strong>Azure Backup service</strong>.</p>
<p><strong>Note</strong> that this Index is updated regularly as more posts are added around this topic.</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-backup-set-storage-redundancy-for-recovery-services-vault" target="_blank">Set Storage Redundancy for Recovery Services Vault</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-cross-region-restore-crr-for-azure-virtual-machines-using-azure-backup" target="_blank">Cross Region Restore (CRR) for Azure Virtual Machines using Azure Backup</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-soft-delete-for-recovery-services-vault" target="_blank">Soft Delete for Recovery Services Vault</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-1-overview" target="_blank">Bring your own keys (BYOK) for recovery services vaults - 1 Overview</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-2-enable-managed-identity-for-your-recovery-services-vault" target="_blank">Bring your own keys (BYOK) for recovery services vaults - 2 Enable managed identity for your Recovery Services vault</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-3-assign-permissions-to-the-vault-to-access-the-encryption-key-in-the-azure-key-vault" target="_blank">Bring your own keys (BYOK) for recovery services vaults - 3 Assign permissions to the vault to access the encryption key in the Azure Key Vault</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-4-enable-soft-delete-and-purge-protection-on-the-azure-key-vault" target="_blank">Bring your own keys (BYOK) for recovery services vaults - 4 Enable Soft Delete and purge protection on the Azure Key Vault</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-5-assign-the-encryption-key-to-the-recovery-services-vault" target="_blank">Bring your own keys (BYOK) for recovery services vaults - 5 Assign the encryption key to the Recovery Services vault</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-backup-center-in-microsoft-azure" target="_blank">Backup Center in Microsoft Azure</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-understanding-the-backup-policy-for-sql-server-in-azure-vm" target="_blank">Understanding the backup policy for SQL Server in Azure VM</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-understanding-the-backup-policy-for-sap-hana-in-azure-vm-database" target="_blank">Understanding the backup policy for SAP HANA in Azure VM (Database)</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-troubleshooting-the-backup-issues-for-sql-server-on-azure-vm-or-sap-hana-on-azure-vm-database" target="_blank">Troubleshooting the backup issues for SQL Server on Azure VM or SAP HANA on Azure VM (Database)</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-configuring-azure-backup-for-azure-file-shares" target="_blank">Configuring Azure Backup for Azure File Shares</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-cross-region-restore-crr-for-azure-virtual-machines-is-now-generally-available" target="_blank">Cross Region Restore (CRR) for Azure Virtual Machines is now generally available</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-backup-managing-the-backup-related-alerts-via-the-azure-monitor" target="_blank">Managing the backup-related alerts via the Azure Monitor</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-the-azure-backup-center-part-1/" target="_blank">Simplifying the Azure Backup Center - Part 1</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-the-azure-backup-center-part-2/" target="_blank">Simplifying the Azure Backup Center - Part 2</a></li>
<li><a href="https://harvestingclouds.com/post/azure-recovery-services-vaults-vs-backup-vaults/" target="_blank">Azure Recovery Services Vaults vs Backup Vaults</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-series-index</link>
<pubDate>Thu, 30 Sep 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Recovery Services Vaults vs Backup Vaults</title>
<description><![CDATA[<p><strong>Azure Recovery Services Vaults</strong> have been the place where you configure all backups. Azure <strong>Backup Vaults</strong> are the new area where new backup features are configured. Let's look at the difference between the two experiences when dealing with the backup of key resources and data.</p>
<h2>The resources they protect</h2>
<p>The key difference between the two vault types is the type of datasource that can be protected by that vault. Let's take a look at each vault's supported datasource types.</p>
<p><strong>Azure Recovery Services vaults</strong> can protect the following types of datasources:</p>
<ul>
<li>Azure Virtual machines</li>
<li>SQL in Azure VM</li>
<li>Azure Files (Azure Storage)</li>
<li>SAP HANA in Azure VM</li>
<li>Azure Backup Server</li>
<li>Azure Backup Agent</li>
<li>DPM</li>
</ul>
<p><strong>Azure Backup vaults</strong> can protect the following types of datasources:</p>
<ul>
<li>Azure Database for PostgreSQL servers</li>
<li>Azure Blobs (Azure Storage)</li>
<li>Azure Disks</li>
<li>Kubernetes Service</li>
<li>AVS Virtual machines</li>
</ul>
<img src="/images/1648101762623c098286f87.png" alt="Supported datasources for each vault type">
<h2>Backup Center - a single pane of glass to backup management</h2>
<p><strong>Azure Backup Center</strong> is the place that unifies the management of the backup of all the datasources across different vaults. We explored this in detail in the below two blog posts. Check these out to learn more about the backup center:</p>
<ul>
<li><a href="https://harvestingclouds.com/post/simplifying-the-azure-backup-center-part-1/" target="_blank">Simplifying the Azure Backup Center - Part 1</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-the-azure-backup-center-part-2/" target="_blank">Simplifying the Azure Backup Center - Part 2</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-recovery-services-vaults-vs-backup-vaults</link>
<pubDate>Tue, 28 Sep 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>New Feature Alert - Zone redundant storage (ZRS) for Azure Disk Storage and Azure Storage Accounts</title>
<description><![CDATA[<p>With <strong>Availability Zones</strong> there is a new way to make data redundant in a zone-aware deployment. This is known as &quot;<strong>Zone Redundant Storage</strong>&quot; or <strong>ZRS</strong>. This option replicates the data across different availability zones and therefore protects you against datacenter-level failures. It is highly recommended for architectures involving a highly available (HA) design/solution.</p>
<h2>ZRS for Managed Disks</h2>
<p>ZRS as an option for managed disks is a new feature that has just been introduced.</p>
<p>Note that this option is available for: </p>
<ul>
<li>Azure Premium SSDs</li>
<li>Standard SSDs</li>
</ul>
<p>At the time of writing of this post, this option is only available for the below regions:</p>
<ul>
<li>West Europe  </li>
<li>North Europe </li>
<li>West US 2 </li>
<li>France Central</li>
</ul>
<p>When you create a new managed disk, you can modify its redundancy option while selecting the disk size. If you are using one of the regions mentioned above, you will see &quot;Zone-redundant storage&quot; as an option for the <strong>disk SKU</strong>. You will have the option to select either Premium SSD or Standard SSD only.</p>
<img src="/images/1647932669623974fde0111.png" alt="ZRS Option for Managed Disks">
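<p>If you prefer the CLI over the portal, the ZRS SKU can be passed directly when creating the disk. A sketch (the resource group and disk names are examples only; the commented command requires a logged-in Azure CLI session and one of the ZRS-enabled regions listed above):</p>
<pre><code># Premium_ZRS and StandardSSD_ZRS are the ZRS disk SKUs
sku="Premium_ZRS"

# With the Azure CLI logged in:
# az disk create --resource-group rg-demo --name data-disk-zrs \
#   --size-gb 128 --location westeurope --sku "$sku"
echo "$sku"</code></pre>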
<h2>ZRS and GZRS options for Storage Accounts</h2>
<p>For storage accounts, this feature has been available for some time and is more widely available across regions.</p>
<p>When creating new storage accounts, you can select ZRS and GZRS for the storage account data redundancy. These stand for:</p>
<ul>
<li><strong>ZRS</strong> - Zone-redundant storage - data is replicated across different zones.</li>
<li><strong>GZRS</strong> - Geo-zone-redundant storage - this is a mix of GRS and ZRS. In addition to data being replicated across zones, it is also replicated across different geo regions.</li>
</ul>
<img src="/images/164793267762397505de820.png" alt="ZRS Option for Storage Accounts">
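<p>The same choice can be made from the CLI when creating a storage account. A sketch (the account name is an example and must be globally unique; the commented command requires a logged-in Azure CLI session):</p>
<pre><code># Standard_ZRS for zone redundancy, Standard_GZRS for geo-zone redundancy
sku="Standard_GZRS"

# With the Azure CLI logged in:
# az storage account create --name harvestdemogzrs01 --resource-group rg-demo \
#   --location westeurope --sku "$sku"
echo "$sku"</code></pre>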
<h2>Biggest Advantages</h2>
<p>There are lots of advantages to leveraging ZRS storage redundancy in your solutions. The big ones include:</p>
<ul>
<li>If a virtual machine becomes unavailable in an affected zone, you can continue to work with the disk by mounting it to a virtual machine in a different zone. </li>
<li>You can also use the ZRS option with shared disks to provide improved availability for clustered or distributed applications, e.g. SQL FCI, SAP ASCS/SCS, or GFS2.</li>
</ul>
<p><strong>References</strong>: </p>
<ul>
<li><a href="https://azure.microsoft.com/en-us/updates/zone-redundant-storage-zrs-for-azure-disk-storage-now-generally-available/" target="_blank">Zone redundant storage (ZRS) for Azure Disk Storage</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy#redundancy-in-the-primary-region" target="_blank">Azure Storage redundancy in the primary region</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/new-feature-alert-zone-redundant-storage-zrs-for-azure-disk-storage-and-azure-storage-accounts</link>
<pubDate>Mon, 20 Sep 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying the Azure Backup Center - Part 2</title>
<description><![CDATA[<p>In the previous post, we looked at the overview of the <strong>Backup Center</strong>. You can read the first part here: <a href="https://harvestingclouds.com/post/simplifying-the-azure-backup-center-part-1/" target="_blank">Simplifying the Azure Backup Center - Part 1</a> </p>
<p>In this post, we will look closely at various sections and actions within it. When you launch Backup Center, as we saw in the previous post, you are presented with the Overview screen. This screen is powerful in itself in presenting you with a unified view of backup of various datasource types across your environment. Other parts of the Backup Center are clubbed into the following three main categories:</p>
<ol>
<li>Management</li>
<li>Monitoring &amp; Reporting</li>
<li>Policy and Compliance</li>
</ol>
<p>Let's look at these three areas in detail.</p>
<h2>1. Management</h2>
<p>The management section lets you manage various aspects of your backup. The first aspect is the <strong>backup instances</strong>. Here you can view and manage various backup instances filtered by your datasource type. The key thing to note is that the view and the columns shown update automatically based on the datasource type selected. You can also set up a new backup and/or trigger a restore from an existing backup by using the buttons at the top of the page. </p>
<img src="/images/1648095666623bf1b2aca48.png" alt="Backup Instances">
<p>The next section, &quot;Backup Policies&quot;, lets you view and manage backup policies across your environment. You can update existing policies or create new ones as required.</p>
<img src="/images/1648095675623bf1bb25e2f.png" alt="Backup Policies">
<p>The third and last section in this category lets you view vaults (both Backup and Recovery Services vaults) across your environment. You can navigate to a vault by clicking on the corresponding row. You can also create a new vault from here.</p>
<img src="/images/1648095681623bf1c1cccca.png" alt="Vaults">
<h2>2. Monitoring and Reporting</h2>
<p>The monitoring and reporting section helps you monitor and support your backup infrastructure. It has the following four sections:</p>
<ol>
<li><strong>Backup jobs</strong> - shows you all the backup-related jobs, across your environment.</li>
<li><strong>Alerts</strong> - shows you all alerts related to your backup data across different vaults.</li>
<li><strong>Metrics</strong> - provides monitoring of various metrics on the backup data. The two main metrics available are &quot;Backup health events&quot; and &quot;Restore health events&quot;.</li>
<li><strong>Backup reports</strong> - detailed reporting into the backup data linked to log analytics.</li>
</ol>
<img src="/images/1648096240623bf3f0133b7.png" alt="Backup jobs">
<h2>3. Policy and Compliance</h2>
<p>The policy and compliance section is all about the governance of the backup data. The main sub-sections in this section are:</p>
<ol>
<li><strong>Backup compliance</strong> - shows you the backup-related compliance of various resources, based on the policies and initiatives (under the &quot;Backup&quot; category) applied in your environment.</li>
<li><strong>Azure policies for backup</strong> - all the Azure policies filtered by the &quot;Backup&quot; category. You can also view initiatives here by updating the drop-down filters.</li>
<li><strong>Protectable datasources</strong> - These are all the resources that can be protected. </li>
</ol>
<p>Protectable datasources is a very important section as it helps you easily find all the resources for which you haven't configured protection. Filter by datasource type to view the resources that can be protected for that type. E.g. the image below shows all the VMs that can be protected via the backup service.</p>
<img src="/images/1648096247623bf3f789f98.png" alt="Protectable datasources">
<p>That is all there is to it. Explore all these sections one by one and play around with different options available in each. Backup Center is very powerful in unifying the backup experience and providing a single pane of glass to monitor and manage your backups.</p>
<p><strong>References</strong>: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/backup-center-overview" target="_blank">Overview of Backup center</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/backup-center-support-matrix" target="_blank">Support matrix for Backup center</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-the-azure-backup-center-part-2</link>
<pubDate>Fri, 17 Sep 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying the Azure Backup Center - Part 1</title>
<description><![CDATA[<p><strong>Azure Backup Center</strong> is the new one-stop-shop for all your backup management related tasks within Microsoft Azure. It unifies the backup data from various vaults across different subscriptions within your tenant.</p>
<h2>Navigating to the Backup Center</h2>
<p>To navigate to the <strong>Backup Center</strong>, go to All Services and search for it, or, even easier, search for it in the top search bar. Once launched, you get the view shown in the below image. Note, in the middle top section, that you can filter the data being shown by the following categories:</p>
<ul>
<li>Subscription (across your tenant that you have access to)</li>
<li>Resource Group</li>
<li>Location</li>
<li>Type (type of the backup data)</li>
<li>Vault (where it is being protected)</li>
</ul>
<img src="/images/1648000888623a7f78dec92.png" alt="Backup Center homepage">
<p>The Datasource types that you can manage directly from the Backup Center include:</p>
<ul>
<li>Azure Virtual machines</li>
<li>SQL in Azure VM</li>
<li>Azure Files (Azure Storage)</li>
<li>SAP HANA in Azure VM</li>
<li>Azure Disks</li>
<li>Azure Blobs (Azure Storage)</li>
<li>Azure Database for PostgreSQL servers</li>
</ul>
<img src="/images/1648000897623a7f8116ec9.png" alt="Datasource type">
<h2>Allowed Actions</h2>
<p>Let's check out what all you can do with the Backup Center.</p>
<style>
  table,
  th,
  td {
    border: 1px solid black;
    border-collapse: collapse;
  }
</style>
<table>
  <tr>
    <td>
      <b>Category</b>
    </td>
    <td>
      <b>Actions</b>
    </td>
  </tr>
  <tr>
    <td>
      <b>Monitoring</b>
    </td>
    <td>
      <ul>
        <li>View all Jobs</li>
        <li>View all backup instances</li>
        <li>View all backup policies</li>
        <li>View all vaults</li>
        <li>View Azure Monitor alerts at scale</li>
        <li>View Azure Backup metrics and write metric alert rules</li>
      </ul>
    </td>
  </tr>
  <tr>
    <td>
      <b>Actions</b>
    </td>
    <td>
      <ul>
        <li>Configure backup</li>
        <li>Restore Backup Instance</li>
        <li>Create vault</li>
        <li>Create backup policy</li>
        <li>Execute on-demand backup for a backup instance</li>
        <li>Stop backup for a backup instance</li>
        <li>Execute cross-region restore job from Backup center</li>
      </ul>
    </td>
  </tr>
  <tr>
    <td>
      <b>Insights</b>
    </td>
    <td>
      <ul>
        <li>View Backup Reports</li>
      </ul>
    </td>
  </tr>
  <tr>
    <td>
      <b>Governance</b>
    </td>
    <td>
      <ul>
        <li>View and assign built-in and custom Azure Policies under category Backup</li>
        <li>View datasources not configured for backup</li>
      </ul>
    </td>
  </tr>
</table>
<h2>Unsupported Scenario</h2>
<p>Currently, the only unsupported scenario is updating the vault settings at scale. You have to navigate to each individual vault to update its settings.</p>
<p>In the next post, we will look more closely at various sections and actions within the Backup Center. You can read the second part here: <a href="https://harvestingclouds.com/post/simplifying-the-azure-backup-center-part-2/" target="_blank">Simplifying the Azure Backup Center - Part 2</a></p>
<p><strong>References</strong>: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/backup-center-overview" target="_blank">Overview of Backup center</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/backup-center-support-matrix" target="_blank">Support matrix for Backup center</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-the-azure-backup-center-part-1</link>
<pubDate>Tue, 14 Sep 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Automatically trigger on-demand SAP HANA on Azure VM Backup for multiple Databases - Code Sample</title>
<description><![CDATA[<p>Let's assume you have various <strong>SAP HANA databases on Azure VMs</strong> that are being backed up daily to an Azure Recovery Services vault using the Backup service. Before performing any major activity in your environment, you will want to take an on-demand backup. This gives you a fallback option if anything goes wrong during the activity. Since the on-demand backup for SAP HANA on Azure VMs doesn't let you specify an expiry date, you may want to update the backup policy temporarily to extend the retention.</p>
<p>The script I present in this post automates the triggering of this on-demand backup for multiple SAP HANA databases. The script takes the name of VMs and related SAP HANA database information in a csv file as input.</p>
<p><strong>NOTE</strong>: As the feature to trigger an on-demand SAP HANA database backup is available only via the Azure CLI, we are using the CLI command. Please ensure that you also have the Azure CLI installed in your environment by following the instructions here: <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" target="_blank">How to install the Azure CLI</a>.</p>
<h2>Script location</h2>
<p>The latest version of the script can be found in GitHub here: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Trigger-AzSAPHANAOnDemandBackup" target="_blank">Trigger-AzSAPHANAOnDemandBackup</a></p>
<p>The input CSV file sample is also located at the same location. Use this file to provide the VM names and related SAP HANA database information as an input to the script. Update the path to this file in the script input variables. Update this file as per your environment.</p>
<h2>Script Working</h2>
<p>The script starts by checking whether the input config file (with the VM names) exists. If it doesn't, the script prints a message asking you to check the path and exits. It then imports the CSV file, sets the context to the right Recovery Services vault, and starts iterating through the VMs, setting the item and container variables as below.</p>
<pre><code>$itemName = "SAPHanaDatabase;$instanceName;$databaseName"
$containerName = "VMAppContainer;Compute;$VMResourceGroup;$vmName"</code></pre>
<p>Finally, it triggers the on-demand backup using the below az cli command.</p>
<pre><code>az backup protection backup-now --resource-group $rgOfVault --item-name $itemName --vault-name $vaultName --container-name $containerName --backup-type Full --output table</code></pre>
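<p>Putting these pieces together, the overall flow can be sketched roughly as below. This is a minimal sketch, not the actual script; the CSV column names and the $inputCsvPath, $rgOfVault, and $vaultName variables are assumptions based on the description above.</p>
<pre><code># Minimal sketch of the flow described above (column and variable names are assumed)
if (-not (Test-Path $inputCsvPath)) {
    Write-Output "Input file not found. Please check the path: $inputCsvPath"
    exit
}
$databases = Import-Csv $inputCsvPath
foreach ($db in $databases) {
    $itemName      = "SAPHanaDatabase;$($db.InstanceName);$($db.DatabaseName)"
    $containerName = "VMAppContainer;Compute;$($db.VMResourceGroup);$($db.VMName)"
    az backup protection backup-now --resource-group $rgOfVault --item-name $itemName --vault-name $vaultName --container-name $containerName --backup-type Full --output table
}</code></pre>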
<p>I hope this script helps you save lots of time during on-demand backup creation. If you have any improvements you would like to see in the script feel free to provide that as a comment here or a pull request on GitHub.</p>]]></description>
<link>https://HarvestingClouds.com/post/automatically-trigger-on-demand-sap-hana-on-azure-vm-backup-for-multiple-databases-code-sample</link>
<pubDate>Tue, 24 Aug 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Automatically trigger on-demand Azure VM Backup for multiple VMs - Code Sample</title>
<description><![CDATA[<p>Over time, you will have multiple VMs protected in an Azure Recovery Services vault using the Backup service, i.e. the daily <strong>Azure VM Backup</strong>. Before performing any major activity in your environment, you will want to take an on-demand backup with an extended expiry date. This gives you a fallback option if anything goes wrong during the activity. </p>
<p>The script I present in this post automates triggering this on-demand backup for multiple VMs and also lets you specify an expiry date for the backup. The script takes the VM names as input from a CSV file.</p>
<h2>Script location</h2>
<p>The latest version of the script can be found on GitHub here:
<a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Trigger-AzVMOnDemandBackup" target="_blank">Trigger-AzVMOnDemandBackup</a></p>
<p>The input CSV file is also located in the same place. Use it to provide the VM names to the script, and update the path to it in the script's input variables.</p>
<h2>Script Working</h2>
<p>The script starts by checking whether the input config file (with the VM names) exists. If it doesn't, the script prints a message asking you to check the path and exits. It then imports the CSV file, sets the context to the right Recovery Services vault, and starts iterating through the VMs. For each VM, it fetches the backup container using the below command. </p>
<pre><code>$backupcontainer = Get-AzRecoveryServicesBackupContainer `
            -ContainerType "AzureVM" `
            -FriendlyName $vmName</code></pre>
<p>It uses the container to fetch the backup item corresponding to the current VM. </p>
<pre><code>$item = Get-AzRecoveryServicesBackupItem `
            -Container $backupcontainer `
            -WorkloadType "AzureVM"</code></pre>
<p>Finally, it triggers the on-demand backup using the <strong><em>Backup-AzRecoveryServicesBackupItem</em></strong> cmdlet.</p>
<pre><code>Backup-AzRecoveryServicesBackupItem -Item $item -ExpiryDateTimeUTC $dateTillExpiry</code></pre>
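<p>The $dateTillExpiry value passed above can be computed relative to the current time. A minimal sketch, assuming you want the backup retained for 30 days (the number of days is illustrative; pick a window that covers your activity):</p>
<pre><code># Keep the on-demand backup for 30 days from now, in UTC
$daysToKeep = 30
$dateTillExpiry = (Get-Date).ToUniversalTime().AddDays($daysToKeep)</code></pre>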
<p>I hope this script helps you save lots of time during on-demand backup creation. If you have any improvements you would like to see in the script feel free to provide that as a comment here or a pull request on GitHub.</p>]]></description>
<link>https://HarvestingClouds.com/post/automatically-trigger-on-demand-azure-vm-backup-for-multiple-vms-code-sample</link>
<pubDate>Mon, 23 Aug 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Automate Azure Route Table testing using Network Watcher and PowerShell scripting - Code Sample</title>
<description><![CDATA[<p>In any enterprise environment, you will have lots of custom <strong>User Defined Routes</strong> or <strong>UDRs</strong> written within <strong>Azure Route Tables</strong>. You will want to test all of these route tables and verify that the &quot;<em>Next Hop</em>&quot; for each route is what you configured. This could prove to be a tedious task if done manually, so I have automated it using PowerShell scripting and am sharing the sample in this post. </p>
<p><strong>NOTE</strong>: Network Watcher must be enabled in your environment for this script sample to work. Note that you only need to enable Network Watcher in the regions where your resources exist.</p>
<h2>Script location</h2>
<p>The latest version of the script can be found on GitHub here:
<a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Test-NetworkWatcherRouteNextHop" target="_blank">Test-NetworkWatcherRouteNextHop</a>.</p>
<p>A sample input CSV file is also located at the same location.</p>
<h2>Script Working</h2>
<p>For the script to work, make sure that you have filled in the input CSV file as per your environment. Every test has a source and a destination forming the communication pair. The script primarily looks for the below information from the CSV file:</p>
<ol>
<li><strong>SourceSubscription</strong> - The subscription of the source VM</li>
<li><strong>SourceRG</strong> - The resource group of the source VM</li>
<li><strong>SourceVMName</strong> - Name of the source VM</li>
<li><strong>Source IP - Indicative</strong> - Actual IP address of the source VM. Please ignore the &quot;indicative&quot; text</li>
<li><strong>DestinationIP-Actual</strong> - Actual IP address of the destination VM</li>
<li><strong>SourceLocation</strong> - Azure region location of the source VM</li>
</ol>
<p><strong>NOTE</strong>: Within the script, update the Network Watcher names as per your primary and secondary Azure regions wherever the &quot;<strong><em>Get-AzNetworkWatcher</em></strong>&quot; cmdlet is used. E.g., to fetch the Network Watcher in the East US 2 region, the command is as below.</p>
<pre><code>$nw = Get-AzNetworkWatcher -Name NetworkWatcher_eastus2 -ResourceGroupName NetworkWatcherRG</code></pre>
<p>The script then fetches the VM using the <strong><em>Get-AzVM</em></strong> cmdlet.</p>
<pre><code>$vm = Get-AzVM -Name $SourceVMName -ResourceGroupName $SourceVMResourceGroupName</code></pre>
<p>Finally, it executes the test by using the <strong><em>Get-AzNetworkWatcherNextHop</em></strong> command.</p>
<pre><code>$nextHop = Get-AzNetworkWatcherNextHop -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id -SourceIPAddress $SourceVMIPAddress -DestinationIPAddress $DestinationIPAddress</code></pre>
<p>To get the results it uses the output of the above command.</p>
<pre><code>$eachVM.NextHopType = $nextHop.NextHopType
$eachVM.IPAddressResult = $nextHop.NextHopIpAddress
$eachVM.RouteTableID = $nextHop.RouteTableId</code></pre>
<p>You can then compare the output with the expected output to check if the test case passed or failed. </p>
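<p>As a sketch of that comparison, you could add expected-value columns to the input CSV and check them against the results. Note that the <em>ExpectedNextHopType</em> and <em>ExpectedIPAddress</em> columns here are hypothetical and not part of the shared sample:</p>
<pre><code># ExpectedNextHopType and ExpectedIPAddress are hypothetical CSV columns
if (($eachVM.NextHopType -eq $eachVM.ExpectedNextHopType) -and ($eachVM.IPAddressResult -eq $eachVM.ExpectedIPAddress)) {
    Write-Output "$($eachVM.SourceVMName): test PASSED"
} else {
    Write-Output "$($eachVM.SourceVMName): test FAILED (next hop was $($eachVM.NextHopType))"
}</code></pre>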
<p>If you have any improvements you would like to see in the script feel free to provide that as a comment here or a pull request on GitHub.</p>]]></description>
<link>https://HarvestingClouds.com/post/automate-azure-route-table-testing-using-network-watcher-and-powershell-scripting-code-sample</link>
<pubDate>Wed, 21 Jul 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Block all public access to Azure Storage Accounts - via Azure policy - with complete sample</title>
<description><![CDATA[<p>In the last post, we looked at how to block all public access to Azure Storage Accounts via manual configurations on the Storage Account. You can view that post here: <a href="https://harvestingclouds.com/post/block-all-public-access-to-azure-storage-accounts-via-manual-setting/" target="_blank">Block all public access to Azure Storage Accounts - via manual setting</a>. In this post, we are looking at how to automatically enforce this in your environment not just for existing storage accounts but also for any new storage accounts that will be created in the future. As the title suggests, we are going to use Azure policy to achieve this. </p>
<p>You have <strong>two key options</strong> when it comes to any Azure policy: you can use it only to audit which resources are not compliant, or you can enforce the required settings. We ideally want the latter, so that if someone changes the settings, they are auto-corrected.</p>
<h2>Finding the required resources within the policy</h2>
<p>You first need to find the relevant resources. The <strong>two conditions</strong> you need to apply within the policy are:</p>
<ol>
<li>You need to filter for all <strong>Storage Account</strong> resources</li>
<li>Then you need to filter for Storage Accounts for which &quot;<strong>allowBlobPublicAccess</strong>&quot; is not equal to &quot;<strong>false</strong>&quot; i.e. for which the allow blob public access is enabled.</li>
</ol>
<p>For the first condition i.e. to filter by Storage account resource you use the below condition within the policy:</p>
<pre><code>{
  "field": "type",
  "equals": "Microsoft.Storage/storageAccounts"
}</code></pre>
<p>For the second condition, i.e. to filter for Storage Accounts for which &quot;<strong>allowBlobPublicAccess</strong>&quot; is not equal to &quot;<strong>false</strong>&quot; you can use the below condition:</p>
<pre><code>{
  "not": {
    "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess",
    "equals": "false"
  }
}</code></pre>
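<p>Combined under an &quot;<strong>allOf</strong>&quot; operator so that both conditions must match, the &quot;if&quot; section looks roughly like this (a sketch consistent with the complete sample on GitHub):</p>
<pre><code>"if": {
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.Storage/storageAccounts"
    },
    {
      "not": {
        "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess",
        "equals": "false"
      }
    }
  ]
}</code></pre>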
<h2>Option 1 - Auditing for compliance</h2>
<p>If you only want to audit the storage accounts on which blob public access is enabled, you can specify the effect as &quot;<strong>audit</strong>&quot;, as shown below:</p>
<pre><code>  "then": {
    "effect": "audit"
  }</code></pre>
<p>This will only show the non-compliant storage accounts under the compliance report. </p>
<h2>Option 2 - Enforcing the setting (Recommended)</h2>
<p>To go one step further you want to enforce the setting. You have two ways to do this:</p>
<ol>
<li>Deny the operation</li>
<li>Reapply the settings. </li>
</ol>
<p>To deny the operation simply specify the effect as &quot;<strong>deny</strong>&quot; as shown below:</p>
<pre><code>  "then": {
    "effect": "deny"
  }</code></pre>
<p>To reapply the settings automatically, you can use the &quot;<strong>modify</strong>&quot; effect in the policy as shown below. You need to specify the GUID of the role with which the modification will be applied, and then an &quot;<strong>addOrReplace</strong>&quot; operation to update the setting:</p>
<pre><code>"then": {
      "effect": "modify",
      "details": {
        "roleDefinitionIds": [
          "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
        ],
        "operations": [
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess",
            "value": false
          }
        ]
      }
    }</code></pre>
<p><strong>NOTE</strong>: You can create Exemptions in the policy if you have a genuine case where the access needs to be opened. E.g. a public static website hosted on the Storage Account.</p>
<h2>Complete Policy Sample GitHub Location</h2>
<p>The complete policy sample can be found on GitHub at this location: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/blob/main/Storage/DisableBlobPublicAccess.json" target="_blank">DisableBlobPublicAccess.json</a></p>
<p><strong>Reference</strong>: <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-prevent" target="_blank">Prevent anonymous public read access to containers and blobs</a></p>]]></description>
<link>https://HarvestingClouds.com/post/block-all-public-access-to-azure-storage-accounts-via-azure-policy-with-complete-sample</link>
<pubDate>Sat, 17 Jul 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Block all public access to Azure Storage Accounts - via manual setting</title>
<description><![CDATA[<p>In the recent past, there have been many instances where critical data was unintentionally exposed because public access was enabled on Azure Storage Accounts, allowing outsiders to access the blobs and containers publicly. Now there is a way to block all public access with just the click of a button.</p>
<h2>Finding the setting - Allow blob public access</h2>
<p>To find this setting, navigate to the storage account for which you want to enable this setting. Then navigate to <strong>Configuration</strong> under Settings. Find the configuration for &quot;<strong><em>Allow Blob public access</em></strong>&quot; and set it to &quot;<strong><em>Disabled</em></strong>&quot; to disable any public blob access.</p>
<img src="/images/1648111273623c2ea98665f.png" alt="Allow Blob public access">
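<p>If you prefer to script this change instead of using the portal, the same setting can be applied with the Az PowerShell module (a recent Az.Storage version is assumed). A minimal sketch; the resource group and account names are placeholders:</p>
<pre><code># Disable anonymous blob public access on a storage account
Set-AzStorageAccount -ResourceGroupName "my-resource-group" -Name "mystorageaccount" -AllowBlobPublicAccess $false</code></pre>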
<h2>More ways to secure the storage account</h2>
<p>There are various ways to further lock down and secure your storage account. Let's look at a few of these.</p>
<h3>Locking the Networking to only selected networks</h3>
<p>One of the most important settings is to update the Storage Account's networking settings to not allow access from &quot;<strong>All Networks</strong>&quot;. You should lock it down to specific networks only or to specified Public IP addresses only.</p>
<img src="/images/1648173827623d2303100ce.png" alt="Storage Networking settings">
<h3>Other security related settings</h3>
<p>Another security-related configuration is to require secure transfer. This means that when the data travels to and from the storage account HTTPS should be used instead of HTTP. In the case of file shares, the encrypted connection should be used as well.</p>
<img src="/images/1648173837623d230d988d3.png" alt="Other Storage Configurations">
<h2>Next Steps</h2>
<p>In the next post, we will look at automatically enforcing blocking of the public access to storage accounts across your subscription with Azure policies. We will have this policy apply for any existing and new Storage Accounts. You can view that post here: <a href="https://harvestingclouds.com/post/block-all-public-access-to-azure-storage-accounts-via-azure-policy-with-complete-sample/" target="_blank">Block all public access to Azure Storage Accounts - via Azure policy - with complete sample</a>.</p>
<p><strong>Reference</strong>: <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-prevent" target="_blank">Prevent anonymous public read access to containers and blobs</a></p>]]></description>
<link>https://HarvestingClouds.com/post/block-all-public-access-to-azure-storage-accounts-via-manual-setting</link>
<pubDate>Thu, 15 Jul 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Getting a report for all VMs and their related OS and Data Disks - Code Sample</title>
<description><![CDATA[<p>In the past, I wrote a script sample to output a report of all VMs. That report is now easily available in the Azure portal when you are viewing the VMs: you can apply filters, add/remove columns for more data, and then download the data you see as a CSV. </p>
<p>One thing that is not available is a <strong>report of VMs and their related OS and Data disks</strong>. You may need this at some point as per your requirements, so in this post I am sharing a script that pulls a report of all VMs and all their related disks.</p>
<h2>Script location</h2>
<p>The latest version of the script can be found on GitHub here:
<a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Get-AzResourceInfo-VMsAndAllRelatedDisks" target="_blank">Get-AzResourceInfo-VMsAndAllRelatedDisks</a></p>
<p>A sample output CSV file and a sample excel report (based on the output csv) are also located at the same location.</p>
<h2>Script Working</h2>
<p>The script selects all subscriptions and iterates over them. Within each subscription, it fetches all Resource Groups and iterates over those as well. The script also shows a way to filter the Resource Groups (while fetching them) using wildcards.</p>
<p>For each Resource Group, it fetches all resources and iterates over them, filtering for resources of type virtual machine. Specifically, it matches the type to &quot;<strong><em>Microsoft.Compute/virtualMachines</em></strong>&quot;.</p>
<pre><code>$resource.Type -eq "Microsoft.Compute/virtualMachines"</code></pre>
<p>Then for every VM, it fetches three different types of information and adds each as a separate row to the output CSV:</p>
<ol>
<li>The VM itself</li>
<li>OS disk details</li>
<li>Data disk details - one-row entry for each data disk</li>
</ol>
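<p>As a rough sketch of how those rows can be built from a fetched VM object (the StorageProfile property names are standard Az PowerShell; the row shape shown here is illustrative, not the exact script):</p>
<pre><code>$vm = Get-AzVM -Name $resource.Name -ResourceGroupName $resource.ResourceGroupName
# OS disk row
[PSCustomObject]@{
    Name         = $vm.StorageProfile.OsDisk.Name
    ResourceType = "OSDisk"
    AttachedToVM = $vm.Name
}
# One row per data disk
foreach ($dataDisk in $vm.StorageProfile.DataDisks) {
    [PSCustomObject]@{
        Name         = $dataDisk.Name
        ResourceType = "DataDisk"
        AttachedToVM = $vm.Name
    }
}</code></pre>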
<p>The script fills an &quot;attachedToVM&quot; column for all types of rows to indicate which VM the entry belongs to. It also outputs a &quot;<strong>ResourceType</strong>&quot; column that you can use to further filter the data. </p>
<p>Now that you have the data, you can create custom reports as per your requirements. One such report is provided in the same location for your easy reference.</p>
<p>If you have any improvements you would like to see in the script feel free to provide that as a comment here or a pull request on GitHub.</p>]]></description>
<link>https://HarvestingClouds.com/post/getting-a-report-for-all-vms-and-its-related-os-and-data-disks-code-sample</link>
<pubDate>Thu, 15 Jul 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Restricting resource types from creation in Azure via Azure Policy</title>
<description><![CDATA[<p>As a best practice, you should restrict the <strong>types of resources</strong> that can be created in your environment. Even when such resources are genuinely required, they should be created by temporarily adding an exemption to the policy. This restricts end users from accidentally creating those resources and disrupting your environment. </p>
<h2>Resource Types to restrict</h2>
<p>One such example is <strong>Route Tables</strong>. If someone in your organization with contributor rights creates a Route Table that is applied on a critical subnet then that user can divert all traffic from that subnet. </p>
<p><strong>Public IP Addresses</strong> are another big one that you want to restrict. Any public IP in your environment needs to have a valid business justification. These are connections to the outside world, and therefore you need to ensure that any that exist in your environment have proper safeguards (like a Firewall) in place. </p>
<p>Classic resources should also be limited. You want to conform to the latest standards (ARM-based) and stay away from classic compute and storage resources within Azure.</p>
<p>As a starting point, we recommend considering the below resource types for restriction in your environment. You can then extend this list and add/remove types as per your requirements:</p>
<ul>
<li>Route Tables</li>
<li>Public IP addresses</li>
<li>DNS Zones</li>
<li>Network Watchers</li>
<li>Search Services</li>
<li>Bastion Hosts</li>
<li>Elastic Pools</li>
<li>Classic Compute</li>
<li>Classic Storage</li>
</ul>
<h2>Policy Sample</h2>
<p>The below policy sample implements the restrictions and prevents the resource types we discussed above from being created at all. If any of the mentioned resource types is created, the policy denies the creation operation.</p>
<pre><code>{
    "mode": "All",
    "policyRule": {
      "if": {
        "anyOf": [
          {
            "field": "type",
            "like": "Microsoft.Network/dnszones*"
          },
          {
            "field": "type",
            "like": "Microsoft.Network/networkWatchers*"
          },
          {
            "field": "type",
            "like": "Microsoft.Search/searchServices*"
          },
          {
            "field": "type",
            "like": "Microsoft.Network/publicIPAddresses*"
          },
          {
            "field": "type",
            "like": "Microsoft.Network/bastionHosts*"
          },
          {
            "field": "type",
            "like": "Microsoft.Sql/servers/elasticpools*"
          },
          {
            "field": "type",
            "like": "Microsoft.Network/routeTables*"
          },
          {
            "field": "type",
            "like": "Microsoft.ClassicCompute/*"
          },
          {
            "field": "type",
            "like": "Microsoft.ClassicStorage/*"
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
<h2>Additional Considerations - restricting VM SKUs</h2>
<p>In addition to restricting resource types, you should also restrict VM SKUs. A SKU from a high-performance computing series like the HBv3-series can eat up your budget very fast, so as a good practice you should restrict VM creation to an approved list of SKUs. You will need to maintain the list and update it as different projects require additional SKUs or want to upgrade to the latest series, but that is easily done and hardly takes any time. The benefits far outweigh this one small downside.</p>
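<p>Such a restriction can follow the same pattern as the policy sample above, using the sku.name alias for virtual machines. A sketch of the &quot;if/then&quot; section, with an illustrative allowed-SKU list that you should replace with your approved sizes:</p>
<pre><code>"if": {
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.Compute/virtualMachines"
    },
    {
      "not": {
        "field": "Microsoft.Compute/virtualMachines/sku.name",
        "in": [
          "Standard_D2s_v3",
          "Standard_D4s_v3"
        ]
      }
    }
  ]
},
"then": {
  "effect": "deny"
}</code></pre>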
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Enforcing%20Standards" target="_blank">AzurePolicySamples - Enforcing Standards</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/restricting-resource-types-from-creation-in-azure-via-azure-policy</link>
<pubDate>Wed, 23 Jun 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enforcing Naming Convention for Resource Group via Azure Policy</title>
<description><![CDATA[<p>Having the right naming convention is very important for your environment. If you already have a naming convention defined then you should try to enforce it in your environment. One such way to do this is via Azure policy. In this post, we will look at how we can do this with a policy sample.</p>
<h2>Sample naming convention</h2>
<p>Let's assume that the naming convention for the Resource Groups has the below features/restrictions:</p>
<ul>
<li>The name should start with &quot;RG-&quot;</li>
<li>The name should then include the environment, i.e. dev, test, or prod</li>
<li>The name should end with a location abbreviation, e.g. USE for US East or USW for US West</li>
</ul>
<p>E.g., a name like RG-AccountingRecordsApp-Dev-USE denotes a resource group for the accounting department's &quot;records app&quot;, in the dev environment and within the US East location.</p>
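<p>You can sanity-check a candidate name locally with the same wildcard logic the policy will use. A minimal sketch; the suffix list mirrors the sample convention above:</p>
<pre><code>$name = "RG-AccountingRecordsApp-Dev-USE"
$suffixes = @("*-DEV-USE", "*-DEV-USW", "*-TEST-USE", "*-TEST-USW", "*-PROD-USE", "*-PROD-USW")
$suffixOk = $suffixes | Where-Object { $name -like $_ }
if (($name -like "RG-*") -and $suffixOk) {
    Write-Output "Name follows the convention"
} else {
    Write-Output "Name violates the convention"
}</code></pre>
<p>Note that PowerShell's -like operator is case-insensitive, which matches the case-insensitive behavior of the policy's like/notLike conditions.</p>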
<h2>Enforcing the naming convention</h2>
<p>To enforce the naming convention, we use negated conditions so that we can apply the deny effect and stop the resource group from being created at all. Note that all parts of the naming convention must be satisfied; if any one of them is not met, the resource group creation will fail.</p>
<p>Policy sample:</p>
<pre><code>{
    "mode": "All",
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Resources/subscriptions/resourceGroups"
          },
          {
            "anyOf": [
              {
                "field": "name",
                "notLike": "RG-*"
              },
              {
                "allOf": [
                  {
                    "field": "name",
                    "notLike": "*-DEV-USE"
                  },
                  {
                    "field": "name",
                    "notLike": "*-DEV-USW"
                  },
                  {
                    "field": "name",
                    "notLike": "*-TEST-USE"
                  },
                  {
                    "field": "name",
                    "notLike": "*-TEST-USW"
                  },
                  {
                    "field": "name",
                    "notLike": "*-PROD-USE"
                  },
                  {
                    "field": "name",
                    "notLike": "*-PROD-USW"
                  }
                ]
              }
            ]
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Enforcing%20Standards" target="_blank">AzurePolicySamples - Enforcing Standards</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/enforcing-naming-convention-for-resource-group-via-azure-policy</link>
<pubDate>Fri, 18 Jun 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enforcing resources to have the same location as the containing resource group via Azure Policy</title>
<description><![CDATA[<p>In the previous post, we saw how to restrict the locations of resources and resource groups. You can view that post here: <a href="https://harvestingclouds.com/post/enforcing-location-restrictions-in-your-environment-via-azure-policy/" target="_blank">Enforcing location restrictions in your environment via Azure Policy</a>. In this post, we will look at how to enforce resources to adhere to the location of their containing resource group. If a resource group is in the US East location, then all the resources within it can be enforced to be in the same location, i.e. US East.</p>
<h2>Implementing restrictions via Azure Policy</h2>
<p>While implementing this restriction, we need to consider that resource groups can only be in a geographical location, while some resources have a special location called &quot;global&quot;. So the restriction will have:</p>
<ul>
<li>The location field should be equal to the resource group's location. You can compare by using the function: <strong><em>resourceGroup().location</em></strong></li>
<li>The location field can be equal to &quot;<em>global</em>&quot;</li>
</ul>
<p>Policy sample:</p>
<pre><code>{
    "mode": "Indexed",
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "location",
            "notEquals": "[resourceGroup().location]"
          },
          {
            "field": "location",
            "notEquals": "global"
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
<p>Note that the policy has negative conditions as the effect is denying the resource creation/update operation.</p>
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Location%20Restrictions" target="_blank">AzurePolicySamples - Location Restrictions</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/enforcing-resources-to-have-the-same-location-as-the-containing-resource-group-via-azure-policy</link>
<pubDate>Sat, 12 Jun 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enforcing location restrictions in your environment via Azure Policy</title>
<description><![CDATA[<p>You always want to deploy resources in a limited set of locations within Microsoft Azure that make sense for your business. Usually, this decision is based on the following criteria:</p>
<ul>
<li>The primary location within Azure that is closest to you</li>
<li>Optional - The location that provides the best cost for ExpressRoute, based on proximity and your preferred provider</li>
<li>The secondary location that is usually a paired region for the primary location</li>
<li>Additional locations based on your extended offices or different geographical branches</li>
</ul>
<p>Once you have defined the regions where your resources will be deployed, you want to lock this down via Azure policies so that even by accident, no one deploys resources to any other region. This helps with governance and supportability in the longer run. Let's look at how to apply this next. For this post, let's assume that the locations selected are East US and West US i.e. the resources can only be deployed within these two regions.</p>
<h2>Location restrictions at the Resource Group level</h2>
<p>For the resource group level you want to have the following conditions and then <strong>deny</strong> the operations if true:</p>
<ol>
<li>The type of the resource is the Resource Group</li>
<li>The field for &quot;location&quot; is not in the allowed list of locations.</li>
</ol>
<p>The &quot;<strong>allOf</strong>&quot; operator before these conditions ensures that both conditions must be met at the same time.</p>
<p>Policy sample: </p>
<pre><code>{
    "mode": "All",
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Resources/subscriptions/resourceGroups"
          },
          {
            "field": "location",
            "notIn": [
              "eastus",
              "westus"
            ]
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
<h2>Location restrictions at the Resource level</h2>
<p>The one main caveat with some resources is that they are not deployed in a specific geographic region. They have a special region named &quot;<strong>global</strong>&quot; where such resources are deployed. We will factor this into the list of allowed locations when defining the policy. </p>
<p>Also, we don't need to specify the resource type, so this policy is applied to all resource types.</p>
<p>Policy sample: </p>
<pre><code>{
    "mode": "Indexed",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": [
            "eastus",
            "westus",
            "global"
          ]
        }
      },
      "then": {
        "effect": "deny"
      }
    },
    "parameters": {}
  }</code></pre>
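<p>Once you have the policy JSON, creating and assigning the policy can be scripted as well. A sketch with the Az PowerShell module; the definition name, file path, and subscription ID are placeholders, and the file is assumed to contain just the policyRule section of the sample:</p>
<pre><code># Create the definition from the policyRule JSON and assign it at subscription scope
$definition = New-AzPolicyDefinition -Name "allowed-locations-resources" -Policy (Get-Content -Raw ".\AllowedLocationsRule.json")
New-AzPolicyAssignment -Name "allowed-locations-resources" -PolicyDefinition $definition -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"</code></pre>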
<h3>Additional considerations</h3>
<p>Sometimes you also want to restrict resources to the location of the resource group in which they are deployed. We will look at this in the next blog post.</p>
<h2>Complete Policy Samples on GitHub</h2>
<p>You can find the complete policy samples on GitHub in my policy samples repository here: <a href="https://github.com/HarvestingClouds/AzurePolicySamples/tree/main/Location%20Restrictions" target="_blank">AzurePolicySamples - Location Restrictions</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/enforcing-location-restrictions-in-your-environment-via-azure-policy</link>
<pubDate>Fri, 04 Jun 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Key Vault service SLA raised to 99.99% from 99.9%</title>
<description><![CDATA[<p><strong>Azure Key Vault</strong> service is now even more resilient. The SLA for this service has been raised to <strong>99.99%</strong>. Earlier this SLA was only 99.9%.</p>
<p>With this update, Azure Key Vault guarantees that valid Key Vault requests will have availability 99.99% of the time. </p>
<p>Let's understand what that change translates to. The permissible <strong>downtime or unavailability time</strong> has been reduced as follows.</p>
<ul>
<li><strong>Daily</strong>: From 1m 26s to 8s</li>
<li><strong>Weekly</strong>: From 10m 4s to 1m</li>
<li><strong>Monthly</strong>: From 43m 49s to 4m 22s</li>
<li><strong>Quarterly</strong>: From 2h 11m 29s to 13m 8s</li>
<li><strong>Yearly</strong>: From 8h 45m 56s to 52m 35s</li>
</ul>
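<p>These figures follow from simple arithmetic: permissible downtime equals (1 - SLA) multiplied by the period length. A quick sketch (a 365.25-day year is assumed here, so the results can round slightly differently from the table above):</p>
<pre><code>$periods = [ordered]@{ Daily = 86400; Weekly = 604800; Yearly = 31557600 }
foreach ($p in $periods.GetEnumerator()) {
    $old = (1 - 0.999)  * $p.Value   # previous 99.9% SLA
    $new = (1 - 0.9999) * $p.Value   # new 99.99% SLA
    Write-Output ("{0}: {1:N1}s allowed before, {2:N1}s now" -f $p.Key, $old, $new)
}</code></pre>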
<p>As you can see from the above numbers, even though the SLA change doesn't look big, it makes a very large difference in the allowed downtime.</p>
<p>Reference: <a href="https://azure.microsoft.com/en-us/updates/akv-sla-raised-to-9999/" target="_blank">Azure Key Vault SLA raised to 99.99%</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-key-vault-service-sla-raised-to-9999-from-999</link>
<pubDate>Sat, 22 May 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Site Recovery now supports cross-continental disaster recovery - although only for 3 region pairs</title>
<description><![CDATA[<p>If you have ever had to create a Business Continuity/Disaster Recovery strategy for virtual machines in Azure, the key product/offering you leverage is Azure Site Recovery (ASR). It actively replicates data across regions and provides the ability to fail over in case of a disaster. </p>
<p>Azure Site Recovery now supports cross-continental disaster recovery giving a new meaning to failover. This feature is available for 3 region pairs. These pairs are:</p>
<ol>
<li>Southeast Asia and Australia East</li>
<li>Southeast Asia and Australia Southeast</li>
<li>West Europe and South Central US</li>
</ol>
<p>This means that a workload running in Southeast Asia can be replicated to Australia East and failed over across continents in case of a disaster. </p>
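<p>As a quick illustration, the supported pairs can be captured in a few lines of Python. This is a sketch only; it assumes, as the announcement implies, that replication can be configured in either direction within a pair:</p>

```python
# The three region pairs that currently support cross-continental DR.
# frozenset makes each pair direction-agnostic (assumed bidirectional).
SUPPORTED_PAIRS = {
    frozenset({"Southeast Asia", "Australia East"}),
    frozenset({"Southeast Asia", "Australia Southeast"}),
    frozenset({"West Europe", "South Central US"}),
}

def supports_cross_continental_dr(source_region, target_region):
    """True if ASR supports cross-continental replication between the regions."""
    return frozenset({source_region, target_region}) in SUPPORTED_PAIRS

supports_cross_continental_dr("Southeast Asia", "Australia East")  # True
supports_cross_continental_dr("West Europe", "Australia East")     # False
```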
<p>Official announcement link: <a href="https://azure.microsoft.com/en-us/updates/asr-cross-continental-dr/" target="_blank">Azure Site Recovery now supports cross-continental disaster recovery for 3 region pairs</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-site-recovery-now-supports-cross-continental-disaster-recovery-although-only-for-3-region-pairs</link>
<pubDate>Tue, 11 May 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Log analytics workspace name uniqueness is now per resource group</title>
<description><![CDATA[<p><strong>Azure Monitor Log Analytics workspace names</strong> needed to be globally unique until now (similar to storage account names). Microsoft has changed how names are handled on the backend, and as a result a workspace name no longer needs to be globally unique: it only needs to be unique within a resource group.</p>
<p>This gives you a lot more flexibility to assign whatever names you want to your Log Analytics workspaces, following your own naming standards, without worrying that someone else across the globe could be using the same name. </p>
<h2>Workspace uniqueness updates</h2>
<p><strong>Workspace uniqueness</strong> is maintained as follows:</p>
<ul>
<li><strong>Workspace ID</strong> – remains globally unique (unchanged).</li>
<li><strong>Workspace resource ID</strong> – also remains globally unique.</li>
<li><strong>Workspace name</strong> – needs to be unique only within a resource group.</li>
</ul>
<h2>Cross workspace queries</h2>
<p>Cross-workspace queries should now reference workspaces by one of the following: </p>
<ul>
<li>Qualified name, or</li>
<li>Workspace ID, or</li>
<li>Azure Resource ID</li>
</ul>
<p>Cross-workspace queries that reference a workspace by resource name (i.e. the workspace name) will fail when multiple workspaces share that name, because the reference is ambiguous.</p>
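<p>For example, a small helper can force your tooling to build cross-workspace queries from an unambiguous identifier instead of a bare name. This is a hypothetical Python sketch; the <code>workspace()</code> function and the <code>union</code> syntax are from the Log Analytics query language, while the helper itself is my own illustration:</p>

```python
def workspace_ref(identifier):
    """Wrap an unambiguous workspace identifier in the Log Analytics
    workspace() function. Pass a workspace ID (GUID), a full Azure
    resource ID, or a qualified name -- never a bare workspace name,
    since names are now only unique per resource group."""
    if not identifier:
        raise ValueError("use a workspace ID, resource ID, or qualified name")
    return f'workspace("{identifier}")'

# Union the Heartbeat table across two workspaces by workspace ID
# (the GUIDs below are placeholders, not real workspaces).
query = (
    "union "
    + workspace_ref("11111111-1111-1111-1111-111111111111") + ".Heartbeat, "
    + workspace_ref("22222222-2222-2222-2222-222222222222") + ".Heartbeat"
)
```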
<p><strong>Official update link</strong>: <a href="https://azure.microsoft.com/en-us/updates/general-availability-log-analytics-workspace-name-uniqueness-is-now-per-resource-group/" target="_blank">Log analytics workspace name uniqueness is now per resource group</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/log-analytics-workspace-name-uniqueness-is-now-per-resource-group</link>
<pubDate>Wed, 05 May 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Bastion Series - Index</title>
<description><![CDATA[<p>This is the <strong>Index</strong> for the series of blog posts regarding the <strong>Azure Bastion service</strong>.</p>
<p><strong>Note</strong> that this Index is updated regularly as more posts are added around this topic.</p>
<ol>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-1-what-is-azure-bastion/" target="_blank">What is Azure Bastion</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-2-create-an-azure-bastion-host/" target="_blank">Creating an Azure Bastion Host</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-3-connecting-to-windows-vms-using-azure-bastion/" target="_blank">Connecting to Windows VMs using Azure Bastion</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-4-connecting-to-linux-vms-using-azure-bastion-over-ssh/" target="_blank">Connecting to Linux VMs using Azure Bastion</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-5-connecting-to-vm-scale-sets-vmss-using-azure-bastion/" target="_blank">Connecting to VM Scale Sets (VMSS) using Azure Bastion</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-6-clipboard-access-and-going-full-screen/" target="_blank">Clipboard access and going full screen</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-7-nsg-and-firewall-configurations/" target="_blank">NSG and Firewall configurations</a></li>
<li><a href="https://harvestingclouds.com/post/simplifying-azure-bastion-8-managing-azure-bastion-session-management-monitoring-and-alerting/" target="_blank">Managing Azure Bastion - Session Management, Monitoring and Alerting</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-bastion-series-index</link>
<pubDate>Sun, 25 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 8 Managing Azure Bastion - Session Management, Monitoring and Alerting</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>In the previous few posts, we looked at various aspects of the Azure Bastion service. In this post, we will look at various monitoring and alerting capabilities related to Azure Bastion. </p>
<h3>Monitoring remote sessions</h3>
<p>You can monitor all the remote sessions being facilitated by Azure Bastion. To do this, navigate to the Azure Bastion host and click on &quot;Sessions&quot; under Settings. On the right side, you will be able to view all the active sessions. Click on the Refresh button to update the list with any added or removed sessions.</p>
<p>The list contains many pieces of valuable information. A few important ones are:</p>
<ol>
<li>Target Host Name - Hostname of the VM that has an active session.</li>
<li>UserName - User name that was used to connect to the VM.</li>
<li>Protocol - The protocol (RDP or SSH) used by the Bastion host to connect to the target VM: SSH for Linux and RDP for Windows VMs.</li>
</ol>
<img src="/images/16172197836064d0c7aa98f.png" alt="Monitoring remote sessions">
<h3>Managing remote sessions - Deleting the sessions</h3>
<p>If required, you can delete any session from the list of active sessions. To do this, simply right-click on the session name in the list and click on Delete in the pop-up menu.</p>
<img src="/images/16172197906064d0cedd0d8.png" alt="Option to delete a session">
<p>The person connected to the VM via Azure Bastion will see a prompt similar to the one below. Reconnection attempts from this prompt will not succeed; if the end user wants to connect again, they will have to initiate a new connection from the Azure portal.</p>
<img src="/images/16172197976064d0d59163b.png" alt="Disconnected session">
<h3>Monitoring Azure Bastion</h3>
<p>The monitoring capabilities are integrated with Azure Monitor. Navigate to the <strong>Monitor</strong> service in the Azure portal and then click on the &quot;<strong>Metrics</strong>&quot; option in the left settings pane. On the right side, set the Scope to the Azure Bastion host that you want to monitor. Next, select one of the available metrics from the Metric dropdown. </p>
<p>The options available for Metric are:</p>
<ol>
<li>Availability - Bastion communication status </li>
<li>Saturation - Total memory</li>
<li>Saturation - Used CPU</li>
<li>Saturation - Used memory</li>
<li>Traffic - Session count</li>
</ol>
<p>The one shown in the screenshot below is for &quot;Traffic - <strong>Session count</strong>&quot;.</p>
<img src="/images/16172198356064d0fb2edca.png" alt="Metrics for Bastion Monitoring">
<h3>Creating Alerts on Azure Bastion</h3>
<p>From Azure Monitor, you can also create alerts related to Azure Bastion. The experience is similar to creating an alert for any other service in Azure. To create a new alert, click on the &quot;+ New alert rule&quot; button under the Alerts section.</p>
<p>In the &quot;Create alert rule&quot; blade, select the scope and set it to the Azure Bastion service. Then click on the &quot;Add condition&quot; button. This will bring up a pop-up blade to &quot;Configure signal logic&quot;; this is the condition on which the alert will be triggered. Check all the signals available here.</p>
<p>Additionally, you can select or create an Action Group to send notifications and take actions whenever an alert is triggered.</p>
<img src="/images/16172198416064d101ed680.png" alt="Alerts for Bastion">]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-8-managing-azure-bastion-session-management-monitoring-and-alerting</link>
<pubDate>Sat, 24 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 7 NSG and Firewall configurations</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>In the earlier posts, we saw how to set up the Azure Bastion service and how to connect to VMs using it. In this post, we will see which ports are involved and on which side to allow this communication: that is, which ports are involved in the inbound communication to the Bastion host and which ports are involved in the outbound communication from the Bastion host to the target VMs. You will need this information when dealing with any NSGs or firewalls.</p>
<h3>Key Points</h3>
<p>Azure Bastion is a fully managed PaaS service from Azure that is hardened internally to provide you with secure RDP/SSH connectivity. You don't strictly need to apply an NSG to the Azure Bastion subnet, but if you choose to, the rules described below are required. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines.</p>
<p><strong>Note</strong>: UDR is not supported on an Azure Bastion subnet. You don't need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private. Therefore, unless you have a complex deployment, you only need to define the rules at the NSG level.</p>
<h4>Inbound rules for the Azure Bastion Subnet</h4>
<p><strong>Note</strong>: These are the rules for the <strong>Inbound</strong> or <strong>Ingress</strong> communication for the Azure Bastion host. These are <strong>applied to</strong> the NSG for the Bastion subnet.</p>
<p>Navigate to the &quot;Inbound security rules&quot; section of the NSG and click on the &quot;+ Add&quot; button to add individual rules. As a best practice, leave a gap of 10 or more between rule priorities so that new rules can be inserted later. Ensure that the rules are in the required order. </p>
<p>Add <strong>the rules</strong> for the following:</p>
<ol>
<li>Inbound on 443 from the Internet - Allow HTTPS inbound on port 443 for TCP protocol. Use the service tag of &quot;Internet&quot; for the Source and &quot;Any&quot; for the destination. This rule is required for you to be able to make HTTPS connections to the Azure Bastion host from the Azure portal.</li>
<li>Azure Bastion control plane - Allow 443 for TCP protocol. Use the service tag of &quot;GatewayManager&quot; for the Source and &quot;Any&quot; for the destination. This rule is required for the Azure Bastion control plane, i.e. Gateway Manager, to be able to talk to Azure Bastion.</li>
<li>Azure Load Balancer health probes - Allow 443 for TCP protocol. Use the service tag of &quot;AzureLoadBalancer&quot; for the Source and &quot;Any&quot; for the destination. This rule enables Azure Load Balancer health probes to reach Azure Bastion.</li>
<li>Azure Bastion data plane communication - Allow 5701 and 8080 for &quot;Any&quot; protocol. Use the service tag of &quot;VirtualNetwork&quot; for the Source and also &quot;VirtualNetwork&quot; for the destination. This rule allows communication between the underlying components of Azure Bastion.</li>
</ol>
<img src="/images/16172164676064c3d372f83.png" alt="Inbound for the Azure Bastion Subnet">
<h4>Outbound rules for the Azure Bastion Subnet</h4>
<p><strong>Note</strong>: These are the rules for the <strong>Outbound</strong> or <strong>Egress</strong> communication for the Azure Bastion host. These are <strong>applied to</strong> the NSG for the Bastion subnet.</p>
<p>Navigate to the &quot;Outbound security rules&quot; section of the NSG and click on the &quot;+ Add&quot; button to add individual rules. As a best practice, leave a gap of 10 or more between rule priorities so that new rules can be inserted later. Ensure that the rules are in the required order. </p>
<p>Add <strong>the rules</strong> for the following:</p>
<ol>
<li>Traffic to target VMs - Allow SSH and RDP outbound on ports 22 and 3389 for Any protocol. Use &quot;Any&quot; for the Source and the service tag of &quot;VirtualNetwork&quot; for the destination. This rule allows Bastion to connect to target VMs for SSH and RDP connectivity.</li>
<li>Azure Cloud communication - Allow outbound on port 443 for TCP protocol. Use &quot;Any&quot; for the Source and the service tag of &quot;AzureCloud&quot; for the destination. This rule is required for Azure Bastion to send diagnostics logs, metering logs, and other information to various public endpoints within the Azure cloud.</li>
<li>Azure Bastion data plane communication - Allow outbound on ports 5701 and 8080 for Any protocol. Use &quot;VirtualNetwork&quot; for the Source and also the service tag of &quot;VirtualNetwork&quot; for the destination. This rule allows the underlying components of Azure Bastion to talk to each other.</li>
<li>Internet communication - Allow outbound on port 80 for Any protocol. Use &quot;Any&quot; for the Source and the service tag of &quot;Internet&quot; for the destination. This rule is required for session and certificate validation.</li>
</ol>
<img src="/images/16172164756064c3db1bc20.png" alt="Outbound for the Azure Bastion Subnet">
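<p>If you script these NSGs (for example with ARM templates or an SDK), the inbound and outbound rule sets above can first be captured as plain data. The sketch below is illustrative only: the rule names and the priority numbering starting at 100 with gaps of 100 are my own choices, not requirements.</p>

```python
# NSG rules for the AzureBastionSubnet, as listed in this post.
# Fields: (name, protocol, source, destination, ports)
INBOUND = [
    ("AllowHttpsInbound",        "Tcp", "Internet",          "Any",            [443]),
    ("AllowGatewayManager",      "Tcp", "GatewayManager",    "Any",            [443]),
    ("AllowLoadBalancerInbound", "Tcp", "AzureLoadBalancer", "Any",            [443]),
    ("AllowBastionHostComms",    "Any", "VirtualNetwork",    "VirtualNetwork", [5701, 8080]),
]
OUTBOUND = [
    ("AllowSshRdpOutbound",   "Any", "Any",            "VirtualNetwork", [22, 3389]),
    ("AllowAzureCloud",       "Tcp", "Any",            "AzureCloud",     [443]),
    ("AllowBastionHostComms", "Any", "VirtualNetwork", "VirtualNetwork", [5701, 8080]),
    ("AllowGetSessionInfo",   "Any", "Any",            "Internet",       [80]),
]

def with_priorities(rules, start=100, step=100):
    """Assign priorities with generous gaps so rules can be inserted later."""
    return [(start + i * step,) + rule for i, rule in enumerate(rules)]

for prio, name, proto, src, dst, ports in with_priorities(INBOUND):
    print(prio, name, proto, src, dst, ports)
```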
<h4>Inbound rules for the Target VM's NSG</h4>
<p><strong>Note</strong>: These are the rules for the <strong>Inbound</strong> or <strong>Ingress</strong> communication for the Target VM that you will RDP to via the Bastion host. These are <strong>applied to</strong> the NSG for the subnet of the Target VM or directly to the network interface card of the VM. </p>
<p>You will need to allow ports 3389 (RDP) and 22 (SSH) inbound on the target VM. This needs to be allowed for the Bastion host to be able to make the RDP and SSH connections respectively to the target VM.</p>
<p><strong>Note</strong>  </p>
<ol>
<li>You don't need both ports, i.e. 3389 (RDP) and 22 (SSH). You should limit it to one based on the operating system of the VM: port 3389 for Windows VMs and port 22 for Linux VMs. This is shown as number 4 in the below screenshot.</li>
<li>You should restrict the Source IP address range to the address range of the Bastion subnet, i.e. the &quot;<strong>AzureBastionSubnet</strong>&quot; subnet only. This is shown as number 5 in the below screenshot.</li>
</ol>
<img src="/images/16172164676064c3d372f83.png" alt="Inbound for the Target VM">
<p>You can read more about these in the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/bastion/bastion-nsg" target="_blank">Working with NSG access and Azure Bastion</a></p>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-7-nsg-and-firewall-configurations</link>
<pubDate>Tue, 20 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 6 Clipboard access and going Full screen</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>When working with Azure Bastion, you connect to various VMs via the Bastion host: via RDP for Windows VMs and via SSH for Linux VMs. In both scenarios, you will need to copy and paste text between the local machine and the remote VM. In Bastion, this is managed via the clipboard manager on the Bastion host.</p>
<p>Before you start, make sure that you know how to connect to the VMs and the various caveats linked to that, as discussed in the earlier posts.</p>
<h3>Clipboard Access</h3>
<p>When connecting for the first time, or connecting from a new browser, the Bastion service will ask you to allow text and image clipboard access. I highly recommend allowing this. If you do not allow this access, you won't be able to share the clipboard with the session later.</p>
<p><strong>Note</strong>: Only text copy/paste is supported.</p>
<img src="/images/16171550566063d3f05a651.png" alt="Allow Clipboard access">
<p>When you are in the remote session, launch the Bastion clipboard access tool palette by selecting the two arrows on the left-center of the session.</p>
<img src="/images/1617171691606414ebdf8a2.png" alt="Pop out option">
<p>In the pop-out window, any text you have copied on the local machine will automatically appear in the text box in this window. You can review or modify the text in this window and the clipboard in the remote VM will be updated automatically. </p>
<img src="/images/1617171699606414f37cae9.png" alt="Clipboard">
<h3>Full Screen</h3>
<p>Another very useful option available in the pop-out palette is going full screen. Just click on the button indicated below to enter full-screen mode. This is recommended when you have to work on the remote VM for a long period of time.</p>
<img src="/images/1617171707606414fb514f6.png" alt="Full Screen Option">
<p>Press the Escape button to exit the Full Screen.</p>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-6-clipboard-access-and-going-full-screen</link>
<pubDate>Sun, 18 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 5 Connecting to VM Scale Sets (VMSS) using Azure Bastion</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>In the previous two posts, we saw how to RDP to a Windows VM and how to SSH to a Linux VM. In this post, we will see how to connect to the instances of a Virtual Machine Scale Set (VMSS). Note that in the case of VMSS, Bastion will automatically decide whether to use RDP or SSH based on the operating system of the instance.</p>
<p>To start, navigate to your Virtual Machine Scale Set (VMSS) and click on the Instances. Select the instance to which you want to connect.</p>
<img src="/images/16171677706064059a3857c.png" alt="Selecting VMSS Instance">
<p>To connect to the instance via Bastion, click on the Connect button. In the pop-out menu, click on the Bastion option.</p>
<img src="/images/1617167778606405a231982.png" alt="Bastion Connectivity option">
<p>Next, click on the Use Bastion button on the next screen.</p>
<img src="/images/1617167785606405a9a1532.png" alt="Use Bastion">
<p>In the &quot;Connect using Bastion&quot; screen, note the Bastion host being used to connect. This is only for information. You may need this information to troubleshoot if you run into any issues.</p>
<p>Next, I recommend leaving the check box for &quot;Open in new window&quot; checked. This ensures that the connection opens in a new tab or window and uses the maximum area. If you uncheck this then the connection will open in a new blade to the right and the view will be limited.</p>
<p>Provide the credentials of the instance to which you are trying to connect. </p>
<p>Hit the Connect button to connect to the instance.</p>
<img src="/images/1617167792606405b0c6bf2.png" alt="Connect using Bastion details">
<p>A new window/tab will open in the browser with the VM connected.</p>
<img src="/images/1617167800606405b826bb3.png" alt="Connected VM">]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-5-connecting-to-vm-scale-sets-vmss-using-azure-bastion</link>
<pubDate>Sat, 17 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 4 Connecting to Linux VMs using Azure Bastion over SSH</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>In the previous blog, we saw how to RDP to a Windows VM using Bastion. In this post, we will look at how to SSH to a Linux VM using the Bastion service.</p>
<p>To connect to the VM via Bastion, first, navigate to the Linux VM in Azure. Click on the Connect button. In the pop-out menu, click on the Bastion option.</p>
<img src="/images/16171654566063fc90b1e71.png" alt="Bastion Option">
<p>Next, click on the Use Bastion button on the next screen.</p>
<img src="/images/16171654646063fc98dfab9.png" alt="Use Bastion button">
<p>In the next screen to &quot;<strong>Connect using Azure Bastion</strong>&quot; make note of the following:</p>
<ol>
<li>At the top you will see the Bastion host being used for the connection. This is for your information only. You do not need to take any action here. You may need this if you run into any issues and have to troubleshoot.</li>
<li>I recommend leaving the check box for &quot;Open in new window&quot; checked. This ensures that the connection opens in a new tab or window and uses the maximum area. If you uncheck this then the connection will open in a new blade to the right and the view will be limited.</li>
<li>Next, provide the username that will be used to connect via SSH to the Linux VM.</li>
<li>Authentication type is important. Select the one that you want to leverage. The next entry will update as per the selection here.</li>
<li>Provide the additional information for the selection you made in the previous point. </li>
<li>The advanced settings are optional and can be skipped. They do not apply to all authentication types.</li>
<li>Finally, click on the Connect button to connect to the VM.</li>
</ol>
<p>The options for <strong>authentication types</strong> are as follows:</p>
<ol>
<li>Password - This is to use the normal username and password-based authentication</li>
<li>SSH Private Key - This option is used when you want to provide the private key manually.</li>
<li>SSH Private Key from Local File - This is to upload the private key file from the local computer. You can optionally provide a passphrase for this.</li>
<li>SSH Private Key from Azure Key Vault - This is to use a private key file that you have previously uploaded to the Azure Key Vault.</li>
</ol>
<img src="/images/16171654726063fca09b5cc.png" alt="Connecting via Bastion">
<p>Once you click on the <strong>Connect</strong> button, the SSH session to the VM opens up in a new tab.</p>
<img src="/images/16171655386063fce2774b2.png" alt="Connected VM">
<p><strong>Note</strong>: Almost all the Caveats from the previous blog for RDP to Windows VM also apply to the SSH to Linux VMs.</p>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-4-connecting-to-linux-vms-using-azure-bastion-over-ssh</link>
<pubDate>Sat, 10 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 3 Connecting to Windows VMs using Azure Bastion</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>In the last post, we deployed the Azure Bastion service. In this post, we will look at how to connect to a Windows VM via RDP using the Bastion service.</p>
<h3>Connecting to the Windows VM via RDP</h3>
<p>To start an RDP connection to a Windows host, navigate to the virtual machine that you want to connect to. Click on the Connect button. In the pop-out option, select the Bastion option. Click on the &quot;Use Bastion&quot; button on the next screen.</p>
<img src="/images/16171549706063d39a0fac1.png" alt="Option to connect via Bastion">
<p>In the next screen to &quot;Connect using Azure Bastion&quot;, enter the credentials for the VM to which you are trying to connect. Keep the check box for &quot;Open in new window&quot; checked. If you don't select this checkbox then the RDP session will open in another blade to the right. I prefer to open the RDP session in a new window for maximum screen space.</p>
<img src="/images/16171549776063d3a15cb2c.png" alt="Entering Credentials">
<p>Your VM will open in a new tab. You will be able to perform most of the actions that you can via direct RDP (with the exception of a few).</p>
<img src="/images/16171549836063d3a7d8a71.png" alt="Connected VM">
<h3>Caveats during the connectivity</h3>
<p>There are some caveats when connecting via the Bastion service. Here are a few that I have encountered or was able to reproduce.</p>
<p>The first one is the blocked pop-up alert. If you connect using the option to open in a new window while a pop-up blocker is enabled and the Azure portal is not whitelisted, you will see the below alert in the portal and the connection will not succeed.</p>
<img src="/images/16171550396063d3df0bde5.png" alt="Pop up blocked">
<p><strong>Solution</strong>: Click on the small blocked pop-up icon in the browser's address bar, as shown below, and select to allow pop-ups from the Azure portal.</p>
<img src="/images/16171550466063d3e6c5171.png" alt="Allowing Pop ups">
<p>When connecting for the first time, or connecting from a new browser, the Bastion service will ask you to allow text and image clipboard access. I highly recommend allowing this. If you do not allow this access, you won't be able to share the clipboard with the session later.</p>
<img src="/images/16171550566063d3f05a651.png" alt="Text and image clipboard access">
<p>If for any reason the connection is interrupted then you will see a dialog saying &quot;Disconnected&quot;. You will have the option to <strong>Reconnect</strong> or exit by clicking on the <strong>Close</strong> button.</p>
<img src="/images/16171551196063d42fe0568.png" alt="Disconnected pop up">
<p>A connection error may also occur, e.g. when the VM is restarting. Bastion will automatically attempt to reconnect in a few seconds. You can also force a reconnect by clicking on the &quot;Reconnect&quot; button.</p>
<img src="/images/16171551266063d436edb5f.png" alt="Connection Error">
<p>As you can see, using the Bastion service to connect to Windows VMs is a very easy and straightforward process. In the next post, we will look at how to connect to a Linux VM via SSH using the Bastion service.</p>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-3-connecting-to-windows-vms-using-azure-bastion</link>
<pubDate>Thu, 08 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 2 Create an Azure Bastion Host</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<p>In the previous post, we looked at what the Azure Bastion service is. We discussed its architecture, how it works, key benefits, pricing, etc. In this post, we will look at how to create the Azure Bastion service in the Azure portal.</p>
<p>As we noted in the previous post, an Azure Bastion host is linked to a particular virtual network. When we perform the deployment, you will notice that we need to select a particular virtual network. This should be the virtual network into which the virtual machines you want to RDP or SSH into via Azure Bastion are deployed.</p>
<h3>Pre-requisites</h3>
<p>Before you start setting up the Azure Bastion host, you need to have a subnet in your Virtual network where you are deploying it.</p>
<ol>
<li>The subnet must be named <strong>AzureBastionSubnet</strong>.</li>
<li>The subnet must be <strong>/26 or larger</strong>.</li>
</ol>
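<p>The two subnet prerequisites can be checked with a few lines of Python before you deploy. This is a small sketch using the standard <code>ipaddress</code> module; note that &quot;larger&quot; means a numerically smaller prefix length:</p>

```python
import ipaddress

def valid_bastion_subnet(name, cidr):
    """Check the Azure Bastion subnet prerequisites: the subnet must be
    named exactly 'AzureBastionSubnet' and be /26 or larger."""
    prefix = ipaddress.ip_network(cidr).prefixlen
    return name == "AzureBastionSubnet" and prefix <= 26

valid_bastion_subnet("AzureBastionSubnet", "10.0.1.0/26")  # True
valid_bastion_subnet("AzureBastionSubnet", "10.0.1.0/27")  # False: too small
valid_bastion_subnet("BastionSubnet", "10.0.1.0/26")       # False: wrong name
```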
<p>In order to make a connection, the following <strong>roles</strong> are required:</p>
<ol>
<li>Reader role on the virtual machine</li>
<li>Reader role on the NIC with private IP of the virtual machine</li>
<li>Reader role on the Azure Bastion resource</li>
</ol>
<h3>Creating the Azure Bastion host</h3>
<p>To create a Bastion host, you can navigate to the <strong>Bastions</strong> section in the Azure portal. Click on the &quot;+ Add&quot; button to add a new Bastion host.</p>
<img src="/images/16171514666063c5ea66c5f.png" alt="Bastion section">
<p>A new blade for &quot;Create a Bastion&quot; will open up. In this new blade, provide the details for:</p>
<ul>
<li>Subscription and the Resource Group where you want to deploy the Bastion service.</li>
<li>Provide a name for the Bastion host and the region where it should be deployed. Note that the region should be the same as where the Virtual Network and the VMs are located.</li>
<li>Next, select the target virtual network. The subnet named AzureBastionSubnet will be selected automatically. Note that this subnet must be /26 or larger; if there is no such subnet, you will get an error. </li>
<li>Next, either create a new public IP address or use an existing public IP for the Bastion host.</li>
</ul>
<img src="/images/16171514816063c5f962004.png" alt="Create a new Bastion">
<p>In the next screen provide the Tags. Review all the settings and then create the Bastion host. </p>
<p>That's all there is to it. Once the deployment is complete we will be ready to leverage this to connect to the other VMs in the virtual network. </p>
<p>Note that we will not be able to connect to the Bastion host itself. It is a fully managed service and is used under the hood to connect to the VMs via RDP or SSH internally. We will be able to connect to the other VMs in the network over port 443, directly from the Azure portal in the browser.</p>
<p>In the next post, we will look at how to connect to VMs using the Azure Bastion host.</p>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-2-create-an-azure-bastion-host</link>
<pubDate>Wed, 07 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplifying Azure Bastion - 1 What is Azure Bastion</title>
<description><![CDATA[<p>This blog is a part of the Azure Bastion series. You can find the Index of this series here: <a href="https://harvestingclouds.com/post/azure-bastion-series-index/" target="_blank">Azure Bastion Series</a>.</p>
<h3>1. Details about Azure Bastion</h3>
<p>Azure Bastion is a fully platform-managed PaaS service that provides RDP/SSH connectivity over TLS, i.e. port 443, to the VMs in your virtual network. Think of it as a managed Jump Box or Jump Server service provided by Microsoft. The service is deployed via a managed host into your virtual network. This host acts as a fully managed jump box and is kept up to date by Microsoft. </p>
<p>Note that you will need to deploy one host for each of your virtual networks. Using one host, you will be able to connect to all the VMs in the same virtual network. Only the Bastion host needs a public IP address; none of the other VMs do.</p>
<p>Here are key points about Azure Bastion:</p>
<ul>
<li>Azure Bastion provides an integrated platform alternative to manually deploying and managing jump servers or Jump boxes to shield your virtual machines.</li>
<li>It is a fully platform-managed PaaS service</li>
<li>Bastion host is provisioned inside your virtual network</li>
<li>Secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS</li>
<li>Virtual machines do not need a public IP address, agent, or special client software</li>
<li>One Bastion host is needed per virtual network</li>
<li>RDP/SSH access happens over the public IP of the Bastion resource, on port 443</li>
<li>You don't need an RDP or SSH client. Use the Azure portal to let you get RDP/SSH access to your virtual machine directly in the browser.</li>
<li>Azure Bastion doesn't move or store customer data out of the region it is deployed in.</li>
</ul>
<h3>2. Shortcomings or Areas of Improvement</h3>
<p>The below features are currently either not available or are not supported:</p>
<ul>
<li>IPv6 is not supported. Azure Bastion supports IPv4 only.</li>
<li>Only text copy/paste is supported. Features such as file copy are not supported yet.</li>
<li>It doesn't work with AADJ VM extension-joined machines using Azure AD users.</li>
<li>Azure Bastion currently supports only en-us-qwerty keyboard layout inside the VM.</li>
</ul>
<h3>3. Architecture - How Azure Bastion works</h3>
<p>Bastion Hosts are Jump servers that are deployed with a public IP address. These reside at the perimeter of your network. All the other VMs do not need any public IP address. You will be able to connect to the VMs via this Bastion host. The VMs can be deployed in one or many different subnets in the same virtual network. </p>
<p>You need to have a dedicated subnet named AzureBastionSubnet. This subnet must be /27 or larger, and it should be deployed before creating the Bastion host.</p>
<p>You connect to the Bastion server directly from within the browser on port 443. The Bastion host in turn connects to the VM at port 3389 for RDP and 22 for SSH.</p>
<p>The subnet for the Azure Bastion host needs to have connectivity to the rest of the subnets. This is available by default. If you have NSGs then Bastion-related communication should be allowed. We will look at these rules in detail in a later post.</p>
<p>Note that UDR is not supported on an Azure Bastion subnet. You don’t need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private.</p>
<p>Microsoft keeps Azure Bastion hardened and always up to date for you to ensure that it can withstand attacks from outside.</p>
<img src="/images/161713753660638f80c8e4f.png" alt="Azure Bastion Architecture">
<h3>4. Benefits of using Azure Bastion</h3>
<p>There are various benefits of using Azure Bastion. The main one being able to RDP/SSH to your VMs without exposing them via any public IP address. Below is the list of various benefits of having Azure Bastion in your virtual networks:</p>
<ul>
<li>Secure and seamless RDP and SSH access to your virtual machines</li>
<li>No Public IP exposure on the VM</li>
<li>Help limit threats such as port scanning and other types of malware targeting your VMs</li>
<li>Fully managed, autoscaling, and hardened PaaS service</li>
<li>Uses a modern HTML5-based web client and standard SSL ports. This makes Firewall and other security rules very easy to manage.</li>
<li>Existing authentication works i.e. Existing credentials and SSH keys will still be used for connecting to the VMs</li>
<li>Same single pane of glass experience to connect to all the VMs</li>
<li>Bastion host servers are designed and configured to withstand attacks. The Azure platform protects against zero-day exploits by keeping the Azure Bastion hardened and always up to date for you.</li>
</ul>
<h3>5. Concurrency Considerations</h3>
<p>Both RDP and SSH are usage-based protocols. Heavier sessions will cause the Bastion host to support a lower total number of sessions. The numbers below assume normal day-to-day workflows.</p>
<table class="table">
<thead>
<tr>
<th>Workload Type*</th>
<th>Limit**</th>
</tr>
</thead>
<tbody>
<tr>
<td>Light</td>
<td>100</td>
</tr>
<tr>
<td>Medium</td>
<td>50</td>
</tr>
<tr>
<td>Heavy</td>
<td>5</td>
</tr>
</tbody>
</table>
<p>These workload types are defined here: <a href="https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/remote-desktop-workloads" target="_blank">Remote Desktop workloads</a></p>
<h3>6. Pricing</h3>
<p>There are two components of the pricing related to Azure Bastion:</p>
<ol>
<li><strong>Fixed charge</strong> for the service. This is the charge billed hourly for deploying the service. E.g. in an East US location, this charge is around $0.19 per hour.</li>
<li><strong>Outbound data transfer charges</strong>. This is the charge based on the total outbound data transfer. This is further tiered into various categories based on the total consumption.</li>
</ol>
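<p>As a rough, illustrative calculation (the $0.19/hour figure above is region-specific and may change), the fixed charge alone for a Bastion host running a full month comes to roughly:</p>

```shell
# Approximate fixed monthly charge: hourly rate x ~730 hours per month
awk 'BEGIN { printf "%.2f\n", 0.19 * 730 }'   # prints 138.70
```

Outbound data transfer charges come on top of this, so the real bill depends on usage.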
<p>The pricing details can be found here: <a href="https://azure.microsoft.com/en-ca/pricing/details/azure-bastion/" target="_blank">Azure Bastion pricing</a></p>
<p>You can read more information about Azure Bastion in the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/bastion/bastion-overview" target="_blank">Azure Bastion Overview</a>. </p>]]></description>
<link>https://HarvestingClouds.com/post/simplifying-azure-bastion-1-what-is-azure-bastion</link>
<pubDate>Sat, 03 Apr 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Service Tags are now supported for Defining Routes in Route Tables</title>
<description><![CDATA[<p>Let us assume that you have to define routing for a particular Azure service e.g. Azure Storage. Any communication to the Azure storage service should happen via your defined route e.g. a virtual appliance. To be able to define this route table you will need to define many routes for all the IP addresses corresponding to the Azure Storage service. Microsoft publishes all these IP addresses in a JSON format that you can leverage. This JSON file is available here: <a href="https://www.microsoft.com/en-us/download/details.aspx?id=56519" target="_blank">Azure IP Ranges and Service Tags – Public Cloud</a></p>
<p>This current process would involve around 504 routes, one corresponding to each IP address space published by Microsoft for the Azure Storage service alone. To make matters even more complex, there is a limit on the total number of routes allowed in a subscription: you can have a maximum of 200 Route Tables, with only 400 routes allowed per Route Table. You can check all the service limits here: <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits" target="_blank">Azure subscription and service limits, quotas, and constraints</a></p>
<h3>Solution</h3>
<p>Microsoft has published the solution for this problem. You can now leverage Service Tags when defining User Defined Routes (or UDRs) in a Route Table. I believe this addresses a long-requested feature and makes routing Microsoft service-related communication easy. </p>
<p>This service is currently in <strong>Public Preview</strong> (at the time of writing this blog post). Also, this service is only available via APIs i.e. via REST, PowerShell, CLI, and can also be used in ARM templates. This feature is <strong>not</strong> currently available through the Azure Portal.</p>
<p>You can now specify a Service Tag for the address prefix parameter in a user-defined route for your route table. You can choose from tags representing over 60 Microsoft and Azure services to simplify route creation and maintenance. </p>
<ul>
<li>You no longer need to manually update routes when services change or add to their list of endpoints. Routes with Service Tags will update automatically to include new changes. </li>
<li>This also eliminates the need for regularly updating routes based on the IP data in the weekly JSON file downloads that Microsoft provides. </li>
<li>This also helps reduce the likelihood of running into the routes per route table limit (400) which is common when configuring routing for multiple Microsoft and Azure services. By using Service Tags, you can avoid this, since the tag condenses all ranges for that service into one group. </li>
<li>For example, Microsoft lists more than 4,500 prefixes that collectively represent the Azure address space. You can now use one route with the AzureCloud Service Tag which will include all of these. </li>
</ul>
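<p>Since the feature is API-only at the moment, here is a hedged Azure CLI sketch of a single route that sends all Azure Storage traffic to a network virtual appliance. The resource names and the appliance IP are placeholders, and preview behavior should be verified against your CLI version:</p>

```shell
# One route with the "Storage" Service Tag replaces hundreds of per-prefix routes
az network route-table create --resource-group MyRG --name MyRouteTable

az network route-table route create --resource-group MyRG \
    --route-table-name MyRouteTable --name ToStorageViaNva \
    --address-prefix Storage \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4
```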
<p>Learn more about this capability here: <a href="https://azure.microsoft.com/en-us/updates/public-preview-service-tags-for-user-defined-routing/" target="_blank">Public preview: Service Tags for User Defined Routing</a></p>]]></description>
<link>https://HarvestingClouds.com/post/service-tags-are-now-supported-for-defining-routes-in-route-tables</link>
<pubDate>Sat, 27 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Finding IP address ranges for SUSE Linux Enterprise images to work in Microsoft Azure behind Firewall or NSGs</title>
<description><![CDATA[<p>For <strong>SUSE Linux Enterprise Server</strong> (<strong>SLES</strong>) images to work in the Microsoft Azure environment, they need to communicate with the SUSE servers. Specifically, they need to communicate with the SUSE registration servers and the repository servers. If a VM deployed from a SUSE image sits behind a Firewall or NSG, this communication will be blocked by default and will need to be opened up. </p>
<p>So where do we get the IP addresses or URLs for the SUSE servers?</p>
<h3>Solution</h3>
<p>The SUSE team publishes the required IP addresses in the two XML files below. These are dynamic links that the SUSE team keeps updated, and they should be consulted if any communication breaks.</p>
<p>The <strong>URLs</strong> for the IP addresses are:</p>
<ul>
<li><a href="https://susepubliccloudinfo.suse.com/v1/microsoft/servers/smt.xml" target="_blank">https://susepubliccloudinfo.suse.com/v1/microsoft/servers/smt.xml</a></li>
<li><a href="https://susepubliccloudinfo.suse.com/v1/microsoft/servers/regionserver.xml" target="_blank">https://susepubliccloudinfo.suse.com/v1/microsoft/servers/regionserver.xml</a></li>
</ul>
<p>It is recommended to open the IP addresses for the regions where your VMs are deployed. The required communication occurs on <strong>Port 443</strong> i.e. HTTPS.</p>
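<p>To quickly pull the current list from those endpoints, here is a small sketch using curl and grep. The <code>ip</code> attribute name is assumed from the published XML format, so verify it against the actual file before relying on it:</p>

```shell
# List the SMT server entries; each server element carries an ip="..." attribute
curl -s https://susepubliccloudinfo.suse.com/v1/microsoft/servers/smt.xml \
    | grep -o 'ip="[^"]*"' | sort -u
```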
<h3>Long term solution</h3>
<p>The long-term solution is to deploy a SUSE <strong>RMT</strong> (or <strong>Repository Mirroring Tool</strong>) server in the environment. Older versions of SLES may require SMT instead; it is highly recommended to upgrade and use RMT. </p>
<p>&quot;<em>RMT server establishes a proxy system for SUSE Customer Center with repositories and registration targets. This helps you to centrally manage software updates within a firewall on a per-system basis while maintaining your corporate security policies and regulatory compliance.</em>&quot;</p>
<p>Here is a guide for configuring the RMT: <a href="https://documentation.suse.com/sles/15-SP1/single-html/SLES-rmt/index.html" target="_blank">Repository Mirroring Tool Guide</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/finding-ip-address-ranges-for-suse-linux-enterprise-images-to-work-in-microsoft-azure-behind-firewall-or-nsgs</link>
<pubDate>Fri, 26 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>How to Copy data from Google Cloud Platform (GCP) Storage to Microsoft Azure Storage Block Blob</title>
<description><![CDATA[<p>Copying from Google Cloud to Microsoft Azure has been a challenge that Microsoft has addressed in the latest release of AzCopy. With the latest release, i.e. <strong>v10.9.0</strong>, Microsoft has brought the capability of importing from Google Cloud Platform (GCP) Storage to Microsoft Azure Storage Block Blob. Additionally, this release adds support for scan logging with low or high verbosity, based on your requirements. Tags will be preserved when copying the blobs, and the list command will include &quot;Last Modified Time&quot; related information.</p>
<p>General syntax to copy is as shown below:</p>
<pre><code>azcopy copy 'https://storage.cloud.google.com/&lt;bucket-name&gt;/&lt;object-name&gt;' 'https://&lt;storage-account-name&gt;.blob.core.windows.net/&lt;container-name&gt;/&lt;blob-name&gt;'</code></pre>
<p>An example is as below:</p>
<pre><code>azcopy copy 'https://storage.cloud.google.com/mybucket/myobject' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myblob'</code></pre>
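<p>To copy an entire bucket rather than a single object, AzCopy's documented <code>--recursive</code> flag can be added. The account and bucket names below are illustrative, and GCP authorization must be set up first (the Microsoft documentation describes setting the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable):</p>

```shell
azcopy copy 'https://storage.cloud.google.com/mybucket' \
    'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive=true
```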
<p>This feature is currently in preview at the time of writing this blog. You can read more about how to practically use AzCopy to copy from GCP Storage to a Microsoft Azure Storage block blob here: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-google-cloud?toc=/azure/storage/blobs/toc.json" target="_blank">Copy data from Google Cloud Storage to Azure Storage by using AzCopy</a>.</p>
<link>https://HarvestingClouds.com/post/how-to-copy-data-from-google-cloud-platform-gcp-storage-to-microsoft-azure-storage-block-blob</link>
<pubDate>Thu, 25 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Simplified process for publishing VM Images to Azure Marketplace</title>
<description><![CDATA[<p>Microsoft has announced a new capability for publishing VM Images to the Azure Marketplace. You can now publish a VM Image in a Shared Image Gallery (SIG) to the Azure Marketplace. This capability simplifies your image preparation, testing, and submission process, as you no longer have to extract VHDs, upload them, and generate SAS URIs. With this capability, you can now manage the full image lifecycle within Azure. You can simply create your image from a VM or a VHD in a Shared Image Gallery, then select the SIG Image to publish it in Partner Center.</p>
<h2>Step by step process</h2>
<p>Here is what the step-by-step process looks like:</p>
<ol>
<li>Select an approved base image. This is the Windows or Linux Operating System image that Microsoft has approved to be leveraged as a base for the OS image.</li>
<li>Create a VM from the approved base image. </li>
<li>Configure the VM. In this step, you install the binaries and perform configurations as per your requirements. You also install all the required updates and patches on the VM and ensure that the VM meets the security standards.</li>
<li>Generalize the VM and Capture the image for the VM. </li>
<li>Publish the VM from the Shared Image Gallery (SIG) to the Marketplace</li>
</ol>
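<p>Steps 4 and 5 above can also be scripted. Here is a hedged Azure CLI sketch; the gallery, image definition, version, and VM names are placeholders, and you should verify against your CLI version that <code>--managed-image</code> accepts a VM resource ID directly:</p>

```shell
# Generalize the VM (run sysprep / waagent deprovision inside the VM first)
az vm deallocate --resource-group MyRG --name MyVM
az vm generalize --resource-group MyRG --name MyVM

# Capture it into the Shared Image Gallery as a new image version
az sig image-version create --resource-group MyRG \
    --gallery-name MyGallery --gallery-image-definition MyImageDef \
    --gallery-image-version 1.0.0 \
    --managed-image "$(az vm show -g MyRG -n MyVM --query id -o tsv)"
```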
<p>To capture the image, click on the Capture button at the top of the VM blade.</p>
<img src="/images/1616970203606101db85ee0.png" alt="Capture option">
<p>During the Capture process, ensure you use the option to &quot;Share image to Shared image gallery&quot;</p>
<img src="/images/1616970211606101e3f0fae.png" alt="Option to share image to gallery">
<p>For a detailed walkthrough, check the official documentation: <a href="https://docs.microsoft.com/en-us/azure/marketplace/azure-vm-create-using-approved-base" target="_blank">How to create a virtual machine using an approved base</a></p>
<link>https://HarvestingClouds.com/post/simplified-process-for-publishing-vm-images-to-azure-marketplace</link>
<pubDate>Tue, 23 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Dynamic data masking granular permissions for Azure SQL and Azure Synapse Analytics</title>
<description><![CDATA[<p><strong>Dynamic Data Masking</strong> now supports granular permissions. This support is now available for:</p>
<ul>
<li>Azure SQL Database, </li>
<li>Azure Synapse Analytics, and </li>
<li>Azure SQL Managed Instance</li>
</ul>
<p>This feature allows you to grant and deny the <strong>UNMASK permission</strong> at the schema level, table level, and column level. This enhancement provides a more granular way to control and limit unauthorized access to SQL assets on Azure and improves data security management.</p>
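<p>For example, the granular grants look like this in T-SQL, here wrapped in a <code>sqlcmd</code> call; the server, database, schema, table, column, and user names are all illustrative:</p>

```shell
# Schema-level, table-level, and column-level UNMASK grants
sqlcmd -S myserver.database.windows.net -d MyDb -U sqladmin -P "$SQL_PASSWORD" -Q "
    GRANT UNMASK ON SCHEMA::Sales TO AppUser;
    GRANT UNMASK ON Sales.Customers TO AppUser;
    GRANT UNMASK ON Sales.Customers(Email) TO AppUser;"
```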
<p>You can read more about this feature towards the end of the documentation here: <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/dynamic-data-masking-overview#permissions" target="_blank">Dynamic data masking - Permissions section</a></p>
<link>https://HarvestingClouds.com/post/dynamic-data-masking-granular-permissions-for-azure-sql-and-azure-synapse-analytics</link>
<pubDate>Thu, 18 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>SQL insights in Azure Monitor</title>
<description><![CDATA[<p>Microsoft Azure Monitor now has SQL monitoring capabilities, provided out of the box. </p>
<p>Using this you can monitor:</p>
<ol>
<li>Azure SQL Databases</li>
<li>Azure SQL Managed Instance</li>
<li>SQL Server on Azure VM </li>
</ol>
<p>Now you can monitor things like:</p>
<ul>
<li>CPU Utilization percentage</li>
<li>Memory Utilization percentage</li>
<li>Data IO percentage</li>
<li>Number of User Connections</li>
<li>Active Temp Tables</li>
<li>Memory Grants Pending</li>
<li>Processes blocked</li>
<li>Deadlocks/sec</li>
<li>Total data file size</li>
<li>Total log file size</li>
<li>Total log file used size</li>
<li>Free space in tempdb</li>
<li>Memory broker clerk size, etc.</li>
</ul>
<h2>Where are the SQL Insights</h2>
<p>You can access the SQL insights by navigating to the Monitor in the Azure portal. And then in the Insights section, you can navigate to the &quot;<strong>SQL (preview)</strong>&quot; option.</p>
<img src="/images/16169741476061114326387.png" alt="SQL Monitoring solution">
<h2>Pricing</h2>
<p>There is no separate pricing for the SQL Insights. All costs are incurred by the virtual machines that gather the data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.</p>
<p>You can learn more about the SQL insights from the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/insights/sql-insights-overview" target="_blank">Monitor your SQL deployments with SQL insights</a></p>]]></description>
<link>https://HarvestingClouds.com/post/sql-insights-in-azure-monitor</link>
<pubDate>Thu, 18 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Creating alerts based on Azure Forecasts for spending</title>
<description><![CDATA[<p>You can now create alerts based on the Azure Forecasts. This can help adjust your spending before you hit the budget target. Think about it: if you are going to hit the budget on the 25th day of the month instead of the last day, you would want to know about it as early as possible. The earlier you know, the better you can manage the spending.</p>
<p>You can customize notifications and also take action if the alert is triggered by leveraging the Action Groups in Azure. You can check how to create Action Groups in this post: <a href="https://HarvestingClouds.com/post/creating-the-action-groups-for-alerts-in-microsoft-azure/" target="_blank">Creating the Action Groups for alerts in Microsoft Azure</a>.</p>
<h3>How to configure Alerts based on Forecasts</h3>
<p>Navigate to &quot;Cost Management + Billing&quot; and select the &quot;<strong>Cost Management</strong>&quot; option.</p>
<img src="/images/1616979222606125161f60d.png" alt="Cost Management">
<p>In the Cost Management blade, select the option for &quot;<strong>Budgets</strong>&quot; and then click on &quot;<strong>+ Add</strong>&quot;.</p>
<img src="/images/161697925560612537d527b.png" alt="Budgets inside Cost Management">
<p>In the next screen under &quot;<strong>Create a budget</strong>&quot; select the appropriate values for the following:</p>
<ul>
<li>Budget scope</li>
<li>Budget name</li>
<li>Reset period - under most circumstances this should be Billing month</li>
<li>Creation date</li>
<li>Expiration date</li>
<li>Provide an amount for the Budget</li>
</ul>
<img src="/images/16169792766061254c6544e.png" alt="Create Budget">
<p>In the next screen for &quot;Set alerts&quot;, under the Alert conditions, select the type as &quot;<strong>Forecasted</strong>&quot;. Provide the percentage of budget. E.g. if you provided the budget as $150 and selected the percentage of budget as 80% then the alert would be triggered at $120.</p>
<p>Provide the Action Group to alert via SMS or emails. You can alert via email by providing individual email IDs or a distribution list's ID in the &quot;Alert recipients (email)&quot; entry.</p>
<img src="/images/16169792926061255c9682b.png" alt="Set alerts section">
<p>You can read more about this feature at the official blog here: <a href="https://azure.microsoft.com/en-us/blog/prevent-exceeding-azure-budget-with-forecasted-cost-alerts/" target="_blank">Prevent exceeding Azure budget with forecasted cost alerts</a></p>]]></description>
<link>https://HarvestingClouds.com/post/creating-alerts-based-on-azure-forecasts-for-spending</link>
<pubDate>Tue, 16 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Configuring the Azure Route Server</title>
<description><![CDATA[<p>In the last 2 posts, we looked at what Azure Route Server is and how to create it. You can view those posts here: </p>
<ul>
<li><a href="https://HarvestingClouds.com/post/understanding-the-azure-route-server/" target="_blank">Understanding the Azure Route Server</a></li>
<li><a href="https://HarvestingClouds.com/post/creating-the-azure-route-server/" target="_blank">Creating the Azure Route Server</a></li>
</ul>
<p>In this post, we will look at configuring the Azure Route Server. The below steps are involved in configuring the Route Server:</p>
<ol>
<li>Set up the peering by adding a Peer. This could be a Network Virtual Appliance or NVA. </li>
<li>Establish BGP session from the NVA</li>
<li>Optionally you can configure route exchange if you are trying to connect to ExpressRoute gateway or a VPN gateway. </li>
</ol>
<p>Let's look at these steps in detail. </p>
<h3>Set up the peering</h3>
<p>Navigate to the Route Server by clicking on this link: <a href="https://aka.ms/routeserver" target="_blank">Azure Route Server</a>.</p>
<p>Click on &quot;<strong>Peers</strong>&quot; and then on the right side click on the &quot;<strong>+ Add</strong>&quot; button to add the peering. A pop-up window will open on the side to &quot;<strong>Add Peer</strong>&quot;. Here, enter the values for the NVA. Specifically, enter a name you want to give to this peering, and enter the ASN or <em>Autonomous System Number</em> for your NVA. Also, enter the IP address of the NVA that the Route Server will communicate with to establish BGP.</p>
<p><strong>Note</strong>: The virtual network where you deployed the Route Server should have connectivity to the IPv4 IP address of the NVA.</p>
<img src="/images/1617039597606210ed9fa47.png" alt="Adding a peer">
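<p>The same peering can be added with the Azure CLI. This is a hedged sketch; the resource names, ASN, and peer IP are placeholders for your own environment:</p>

```shell
az network routeserver peering create --resource-group MyRG \
    --routeserver MyRouteServer --name MyNvaPeer \
    --peer-asn 65001 --peer-ip 10.0.1.4
```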
<h3>Establish BGP session from the NVA</h3>
<p>The connection with the NVA will not be established until you finish the configurations on the NVA. These configurations will vary depending on the type of NVA (e.g. Check Point, Palo Alto, to name a few). To complete the connection you will need to provide the NVA team with 2 pieces of information regarding the Route Server. These are:</p>
<ol>
<li>Peer IPs for the Route Server</li>
<li>ASN of the Route Server</li>
</ol>
<p>You can get this information from the Overview page of the Route Server as shown below.</p>
<img src="/images/1617039616606211003a605.png" alt="Getting information for IP and ASN of Route Server">
<h3>Configure route exchange - for ExpressRoute gateway or a VPN gateway</h3>
<p>If you have an ExpressRoute gateway or VPN gateway or both and you want them to exchange routes with the Route Server, you can enable route exchange. It is simply a toggle setting in the Route server. You can access this from the Configurations section under the settings. Select &quot;Enabled&quot; for the &quot;Branch-to-branch&quot; setting and click on the Save button to save the settings.</p>
<img src="/images/16170396306062110ea0a26.png" alt="Configuring Route Exchange">
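<p>The same toggle is exposed in the Azure CLI as the branch-to-branch traffic flag. A hedged sketch with placeholder names (verify the flag against your CLI version):</p>

```shell
# Enable route exchange with ExpressRoute/VPN gateways ("Branch-to-branch")
az network routeserver update --resource-group MyRG \
    --name MyRouteServer --allow-b2b-traffic true
```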
<p>You can find all the Route Server related official documentation here: <a href="https://docs.microsoft.com/en-us/azure/route-server/" target="_blank">Azure Route Server documentation</a></p>]]></description>
<link>https://HarvestingClouds.com/post/configuring-the-azure-route-server</link>
<pubDate>Mon, 15 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Creating the Azure Route Server</title>
<description><![CDATA[<p>In the previous blog post, we looked at what an Azure Route Server is and what its benefits are. You can view the previous post here: <a href="https://HarvestingClouds.com/post/understanding-the-azure-route-server/" target="_blank">Understanding the Azure Route Server</a></p>
<p>In this blog post, we will be setting up an <strong>Azure Route Server</strong> via the Azure Portal.</p>
<h3>Pre-requisite</h3>
<p>If you are setting up Azure Route Server for your Network Virtual Appliance or NVA, the two key pre-requisites are:</p>
<ol>
<li>For you to be able to set up an Azure Route Server, you need to have an empty subnet in your virtual network with the name &quot;<strong>RouteServerSubnet</strong>&quot;. This subnet must be &quot;<strong>/27</strong>&quot; or larger.</li>
<li>The NVA should allow the BGP protocol to leverage the benefits of having the Azure Route Server.</li>
</ol>
<h3>Finding the Route Server section</h3>
<p>To find the section for Route Server navigate to this direct link: <a href="https://aka.ms/routeserver" target="_blank">Azure Route Server</a></p>
<p>To create a new Route Server, simply click on the &quot;<strong>+ Create new route server</strong>&quot; button.</p>
<img src="/images/16170339076061fab3c3d27.png" alt="Route Servers">
<h3>Configurations</h3>
<p>In the Basics section, provide the values for Subscription, Resource Group, Instance name of the route server, and Region for the deployment. Next you will need to select the Virtual network in the same region where you want to deploy the route server. The wizard will automatically select the subnet with the name &quot;RouteServerSubnet&quot;. As noted earlier, this subnet should have at least a &quot;/27&quot; prefix for the address space.</p>
<img src="/images/16170345396061fd2b4b162.png" alt="Basics">
<p>Next, provide any Tags, and then click on &quot;Review + create&quot; and finally click on the &quot;Create&quot; button to create the route server.</p>
<img src="/images/16170347226061fde24cce5.png" alt="Create the Route Server">
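<p>For reference, the equivalent deployment can be sketched with the Azure CLI. Names are placeholders, and depending on your CLI version a public IP parameter may also be required:</p>

```shell
# The subnet must already exist and be named RouteServerSubnet (/27 or larger)
SUBNET_ID=$(az network vnet subnet show --resource-group MyRG \
    --vnet-name MyVNet --name RouteServerSubnet --query id -o tsv)

az network routeserver create --resource-group MyRG \
    --name MyRouteServer --hosted-subnet "$SUBNET_ID"
```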
<p>Behind the scenes, it creates two resources of the following types:</p>
<ul>
<li>Microsoft.Network/<strong>virtualHubs</strong></li>
<li>Microsoft.Network/virtualHubs/<strong>ipConfigurations</strong></li>
</ul>
<p>If you check the Resource Group that you used for deployment, it will look empty. But when you click on the check box for &quot;Show hidden types&quot; you will be able to view the Virtual Hub that was deployed for the Route Server. </p>
<img src="/images/161703712860620748445e8.png" alt="Hidden type for Virtual Hub">
<p>In the next post, we will look at configuring the Route Server. You can view the post here: <a href="https://HarvestingClouds.com/post/configuring-the-azure-route-server/" target="_blank">Configuring the Azure Route Server</a></p>]]></description>
<link>https://HarvestingClouds.com/post/creating-the-azure-route-server</link>
<pubDate>Mon, 15 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Understanding the Azure Route Server</title>
<description><![CDATA[<p><strong>Azure Route Server</strong> is a new offering from Microsoft that simplifies the routing in your infrastructure, especially if you have a Network Virtual Appliance or NVA. It eliminates the need to manually configure or maintain route tables. It is a fully managed service and is configured with high availability.</p>
<p>This service works with:</p>
<ol>
<li>Network Virtual Appliance or NVA</li>
<li>ExpressRoute</li>
<li>Azure VPN Gateway</li>
</ol>
<p>You can enable or disable the route exchange on the Azure Route Server with a simple command. Also, you don't need to manually configure or maintain route tables.</p>
<p>With NVAs, it allows you to exchange routing information directly through Border Gateway Protocol (BGP) routing protocol between any NVA that supports the BGP routing protocol and the Azure Virtual Network.</p>
<h3>Pre-requisite</h3>
<ol>
<li>For you to be able to set up an Azure Route Server, you need to have an empty subnet in your virtual network with the name &quot;<strong>RouteServerSubnet</strong>&quot;. This subnet must be &quot;<strong>/27</strong>&quot; or larger.</li>
<li>The NVA should allow the BGP protocol to leverage the benefits of having the Azure Route Server.</li>
</ol>
<h3>Benefits</h3>
<p>Azure Route Server simplifies the configuration, management, and deployment of your NVA in your virtual network.</p>
<ol>
<li>You no longer need to manually update the routing table on your NVA whenever your virtual network addresses are updated.</li>
<li>You no longer need to update User-Defined Routes manually whenever your NVA announces new routes or withdraws old ones.</li>
<li>You can peer multiple instances of your NVA with Azure Route Server.</li>
<li>The interface between NVA and Azure Route Server is based on a common standard protocol i.e. BGP.</li>
<li>You can deploy Azure Route Server in any of your new or existing virtual networks.</li>
</ol>
<p>In the next blog post, we will create an Azure Route Server in the Azure portal. You can view that post here: <a href="https://HarvestingClouds.com/post/creating-the-azure-route-server/" target="_blank">Creating the Azure Route Server</a></p>]]></description>
<link>https://HarvestingClouds.com/post/understanding-the-azure-route-server</link>
<pubDate>Thu, 11 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Managing the backup-related alerts via the Azure Monitor</title>
<description><![CDATA[<p>You can now manage <strong>backup-related alerts</strong> via the standard <strong>Azure Monitor</strong> experience for alert management. This integration ensures that you can leverage the alert management experience within the &quot;Azure Monitor&quot; as a single pane of glass for managing all alerts in your environment.</p>
<p>This also means that you can leverage Action Groups to send notifications to email, SMS, Azure app push notifications, and voice calls. You can also leverage Action Groups to take actions when an alert is triggered.</p>
<p>All you need to do is navigate to the Alerts section in the Monitor and you will be able to monitor and manage the alerts directly from this pane.</p>
<p>Currently, this feature is supported for the following workloads: Azure Database for PostgreSQL servers, Azure Blobs, and Azure Managed Disks. Support for other workloads will be added in the near future. Alerts are currently generated for security-related scenarios and job failure scenarios, with more enhancements planned in the short term.</p>
<p>You can read more about this feature here: <a href="https://docs.microsoft.com/en-us/azure/backup/backup-azure-monitoring-built-in-monitor#azure-monitor-alerts-for-azure-backup-preview" target="_blank">Azure Monitor alerts for Azure Backup</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-managing-the-backup-related-alerts-via-the-azure-monitor</link>
<pubDate>Fri, 05 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Auditing of Microsoft operations in an Azure SQL Database</title>
<description><![CDATA[<p>Microsoft has added the capability to audit the actions of Microsoft support engineers when they help you troubleshoot issues related to your Azure SQL Databases. This option is easy to configure and is available as part of the server-level auditing capabilities of Azure SQL Database.</p>
<p>To configure this, navigate to the Azure SQL Database and click on the &quot;<strong>Auditing</strong>&quot; option under the security settings. In the right pane, select &quot;<strong>View server settings</strong>&quot;.</p>
<img src="/images/161699778560616d990d05f.png" alt="Auditing option">
<p>In the server-level auditing settings, toggle the setting for &quot;<em>Enable Auditing of Microsoft support operations</em>&quot; to audit all activities performed by Microsoft support.
You can send the audit logs to one or more of the following targets:</p>
<ol>
<li>Storage</li>
<li>Log Analytics</li>
<li>Event Hub</li>
</ol>
<img src="/images/161699780160616da90e305.png" alt="Enable Auditing of Microsoft support operations">
<p>This strengthens the security posture even further for your SQL Databases.</p>]]></description>
<link>https://HarvestingClouds.com/post/auditing-of-microsoft-operations-in-an-azure-sql-database</link>
<pubDate>Thu, 04 Mar 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Microsoft retiring many older services by 29 February 2024</title>
<description><![CDATA[<p>Microsoft has recently announced the retirement of many older services. For many of them, it has set a generous deadline of 29 February 2024 to move to the newer versions; on that date the older services will be deleted. </p>
<p>Here is a list of the prominent services and migrations that Microsoft has announced for 29 February 2024:</p>
<ul>
<li>Update your scripts to use Az PowerShell modules</li>
<li>Switch to Azure Data Lake Storage Gen2</li>
<li>Classic Application Insights</li>
<li>AKS legacy Azure AD integration</li>
<li>Jenkins plug-ins for Azure</li>
<li>Network Performance Monitor</li>
<li>Azure Network Watcher Connection Monitor (classic)</li>
<li>Azure Batch ‘CloudServiceConfiguration’ pools</li>
<li>Azure Application Gateway analytics</li>
<li>Switch Azure Media Services REST API and SDKs to v3</li>
<li>Classic Azure Migrate</li>
<li>Update the Azure Cosmos DB Java SDK</li>
<li>Azure Stack Edge Pro FPGA</li>
<li>Azure Batch rendering VM images &amp; licensing</li>
<li>The standard version of Custom Voice</li>
<li>Azure Cognitive Services Text Analytics v2.x</li>
<li>Upgrade your Azure AD Connect sync to a newer version</li>
<li>Azure Batch Transcription and Customization Rest API v2</li>
</ul>
<p>There are a few other services that Microsoft has announced will retire a bit earlier.</p>
<p>The first item in the list is easy to take care of: you can automatically migrate your scripts to the newer Az PowerShell modules by leveraging the migration tool and steps here: <a href="https://docs.microsoft.com/en-us/powershell/azure/quickstart-migrate-azurerm-to-az-automatically?view=azps-5.7.0" target="_blank">Automatically migrate PowerShell scripts from AzureRM to the Az PowerShell module</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/microsoft-retiring-many-older-services-by-29-february-2024</link>
<pubDate>Thu, 25 Feb 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Cross Region Restore (CRR) for Azure Virtual Machines is now generally available</title>
<description><![CDATA[<p>We discussed the Cross Region Restore (CRR) for Azure Virtual Machines in Azure Backup in an earlier post. You can view that post here: <a href="https://HarvestingClouds.com/post/azure-backup-cross-region-restore-crr-for-azure-virtual-machines-using-azure-backup/" target="_blank">Cross Region Restore (CRR) for Azure Virtual Machines using Azure Backup</a></p>
<p>Now, this feature for Virtual Machines is <strong>generally available</strong>. </p>
<p>Note that:</p>
<ul>
<li>Zone-pinned Azure VMs backed up to a Recovery Services vault with this feature enabled can now be restored into any availability zone of your choice.</li>
<li>Azure Backup supports all <strong>managed and unmanaged VMs</strong> for Cross Region Restore. Classic VMs remain unsupported.</li>
<li>Microsoft is starting with a <strong>replication RPO of up to 12 hours</strong> for the secondary region, even though the geo-replication storage SLA is 15 minutes.</li>
<li>The pricing is unchanged; it is good to see that Microsoft has not increased the price of the feature.</li>
</ul>
<p>While Microsoft has moved CRR for Azure VMs from preview to general availability, CRR for SQL/SAP HANA databases running in Azure VMs remains in preview.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-cross-region-restore-crr-for-azure-virtual-machines-is-now-generally-available</link>
<pubDate>Sat, 20 Feb 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Enabling soft delete for blobs in an Azure Storage Account</title>
<description><![CDATA[<p>Data protection is a key feature that should always factor into the security posture of your organization. Imagine if an attacker gains access to your Storage Account and deletes its contents, or an employee accidentally deletes the contents of a container inside the Storage Account. </p>
<p>Microsoft has added support for soft delete. You can enable it while creating the storage account, under the &quot;Data protection&quot; section. There are four key options you can enable:</p>
<ol>
<li>Turn on point-in-time restore for containers</li>
<li>Turn on soft delete for blobs</li>
<li>Turn on soft delete for containers</li>
<li>Turn on soft delete for file shares</li>
</ol>
<p>These options are shown below.</p>
<img src="/images/161699099660615314d836a.png" alt="Soft delete options in a Storage Account creation">
<h3>Accessing the Soft Delete capabilities for a blob</h3>
<p>If you have enabled soft delete for blobs, you will see a toggle to &quot;<strong>Show deleted blobs</strong>&quot;. Any blob deleted within the retention period configured at storage account creation will show up with the status &quot;Deleted&quot;. </p>
<img src="/images/16169913746061548e330dd.png" alt="Soft delete option after blob deletion">
<p>You can then either right-click or click on the three dots at the end of the row for the blob whose status is &quot;Deleted&quot;. You will see the option &quot;View previous versions&quot;. Click it to see the deleted versions of this blob.</p>
<img src="/images/16169915476061553bf39a3.png" alt="View Previous Versions">
<p>From the deleted versions of the blob, you can select any version and then either download it (by clicking on the <strong>Download version</strong> button) or restore it (by clicking on the <strong>Make current version</strong> button).</p>
<p>Alternatively, you can click on the deleted blob's name and a pane will open up. In this pane, click on the &quot;Undelete&quot; button. If there are multiple versions, you will have to select a version before undeleting.</p>
<img src="/images/161699157160615553cb0ef.png" alt="Restore or Download Previous version">
<p>After restoring the blob, it will show up with the status as Active.</p>
<p><strong>Note</strong>: </p>
<ol>
<li>The soft delete feature retains the data for only a fixed number of days. You define this number based on your recovery strategy during the creation of the storage account. </li>
<li>If you delete the Storage Account then you can't restore the blobs. </li>
</ol>
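<p>The retention rule in note 1 boils down to a simple date comparison, sketched below in Python (the helper is illustrative only, not part of any Azure SDK):</p>

```python
from datetime import date, timedelta

# A soft-deleted blob stays recoverable only while "today" falls inside the
# retention window that was configured when the storage account was created.
def is_recoverable(deleted_on: date, retention_days: int, today: date) -> bool:
    return today <= deleted_on + timedelta(days=retention_days)

# With a 7-day retention, a blob deleted on 1 Feb is recoverable on 5 Feb
# but gone by 20 Feb.
print(is_recoverable(date(2021, 2, 1), 7, date(2021, 2, 5)))   # True
print(is_recoverable(date(2021, 2, 1), 7, date(2021, 2, 20)))  # False
```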
<p>You can read more about this feature in the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/soft-delete-blob-overview" target="_blank">Soft delete for blobs</a></p>]]></description>
<link>https://HarvestingClouds.com/post/enabling-soft-delete-for-blobs-in-an-azure-storage-account</link>
<pubDate>Mon, 15 Feb 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Setting up the Routing Preferences in Azure</title>
<description><![CDATA[<p>Routing preferences determine how your traffic routes between Azure and the Internet. There are two options to select when it comes to routing:</p>
<ol>
<li>Microsoft Network</li>
<li>Internet</li>
</ol>
<p>Selecting the <strong>Microsoft global network</strong> delivers traffic over Microsoft's global network, entering and leaving it as close to the user as possible. The <strong>Internet</strong> routing option instead hands traffic off to transit ISP networks. The egress data transfer price varies based on the routing selection. The Microsoft global network offers lower latency, is more secure, and is the default choice; you should select it for enterprise customers. The Internet option, on the other hand, provides a cost-optimized alternative. </p>
<h3>How to set Routing Preferences for a VM</h3>
<p>You can set the routing preference for a <strong>VM</strong> via its public IP address. The public IP address can be associated with resources such as virtual machines, virtual machine scale sets, internet-facing load balancers, etc. </p>
<p>While creating the public IP address, you can provide the preference as shown below.</p>
<img src="/images/1616986538606141aa081ec.png" alt="Routing Preference">
<h3>How to set Routing Preferences for a Storage Account</h3>
<p>You can also set the routing preference for <strong>Azure Storage</strong> resources such as blobs, files, web, and Azure Data Lake. This is done when creating the Azure Storage Account. The setting is under the <strong>Networking</strong> part of the creation wizard as shown below.</p>
<img src="/images/16169902506061502acf0c0.png" alt="Routing preferences for a Storage Account">
<p><strong>Note</strong>: </p>
<ol>
<li>The routing preference of a public IP address can't be changed once the address is created.</li>
<li>By default, traffic is routed via the Microsoft global network for all Azure services.</li>
</ol>
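<p>Note 1 can be illustrated with a tiny Python model of my own (not an Azure SDK class): the routing preference is accepted only at construction time and is read-only afterwards:</p>

```python
# Illustrative model of the "set once at creation" rule for a public IP's
# routing preference. Class and option names are hypothetical.
class PublicIPAddress:
    def __init__(self, name: str,
                 routing_preference: str = "MicrosoftGlobalNetwork"):
        if routing_preference not in ("MicrosoftGlobalNetwork", "Internet"):
            raise ValueError("unknown routing preference")
        self.name = name
        self._routing_preference = routing_preference  # fixed at creation

    @property
    def routing_preference(self) -> str:
        # Read-only: no setter is defined, so later assignment raises
        # AttributeError, mirroring the portal behavior.
        return self._routing_preference

pip = PublicIPAddress("jumpbox-pip", "Internet")
print(pip.routing_preference)  # Internet
```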
<p>You can read more about Routing preferences here: <a href="https://docs.microsoft.com/en-us/azure/virtual-network/routing-preference-overview" target="_blank">Routing Preference - Microsoft documentation</a></p>]]></description>
<link>https://HarvestingClouds.com/post/setting-up-the-routing-preferences-in-azure</link>
<pubDate>Wed, 10 Feb 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Configuring Azure Backup for Azure File Shares</title>
<description><![CDATA[<p>Azure Recovery Services vaults can back up file shares within Azure Storage Accounts. This provides an additional layer of protection against accidental deletion of file share contents or corruption of the files in the share.</p>
<p><strong>Note</strong>: This blog post assumes that you already have a Recovery Services vault. If you don't, you can create one easily by navigating to the Recovery Services vault section in the Azure portal. </p>
<h3>Step by step backup configurations</h3>
<p>To back up the file share, navigate to the backup vault, go to &quot;<strong>Protected items</strong>&quot;, and click on &quot;<strong>Backup items</strong>&quot;. On the right-hand side, click on &quot;<strong>Azure Storage (Azure Files)</strong>&quot;.</p>
<img src="/images/1616996299606167cb9f0db.png" alt="Backup Items">
<p>From the Backup items pane, you can select the &quot;<strong>+ Add</strong>&quot; button to add new backup items to the vault.</p>
<img src="/images/1616996331606167eb5a048.png" alt="Add new">
<p>In the Backup Goal, select the &quot;Azure File Share&quot; for the question &quot;What do you want to backup?&quot;. Click on the Backup button to start the configurations.</p>
<img src="/images/16169964976061689194f24.png" alt="Backup Goal">
<p>The next step is to select the Storage Account on which the File share exists. Once the storage account is selected, you can select the File share by clicking on the &quot;<strong>Add</strong>&quot; button, selecting the file share, and then clicking Ok in the popup.</p>
<img src="/images/161699688160616a11e2817.png" alt="Storage Account and File Share selection">
<p>The next step is to configure the backup policy. By default, a policy is created for you. It is highly recommended to customize this policy as per your business requirements. At a bare minimum, you should provide a unique and descriptive name for this policy. Remember that this policy can be reused multiple times for other File share backups as well. </p>
<p>Also, provide a backup schedule and the retention of daily backup points. Optionally, you can configure weekly, monthly, and yearly backup retention values. </p>
<p>Once the backup policy is created, click on the &quot;Enable backup&quot; button to submit the deployment to configure the backup.</p>
<img src="/images/161699692560616a3d6111f.png" alt="Backup policy and Enable backup">
<p>Note that configuring the backup places a lock on the Storage Account. Do not delete this lock; otherwise the backup will not work as desired.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-configuring-azure-backup-for-azure-file-shares</link>
<pubDate>Tue, 02 Feb 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Connectivity Requirements for RDP Licensing Server Connectivity and Firewall Rules</title>
<description><![CDATA[<p>If you deploy an RDP license on a VM in Azure, that VM needs to be able to talk to the RDP Licensing Server. The RDP license allows multiple users to log into the VM at the same time: by default, only two users can log in concurrently, and the license lets you increase that number. This is a typical use case for a jump box VM deployment in Microsoft Azure.</p>
<p>For the RDP license to work, it needs to validate with the RDP Licensing Server. For this, the Firewall needs to allow communication. To be able to allow this communication, you need to know the following:</p>
<ol>
<li>The IP address of the <strong>VM</strong> where the RDP license is configured</li>
<li>The IP address of the <strong>RDP Licensing Server</strong></li>
<li>The <strong>port numbers and the protocol</strong> on which to allow the communication</li>
</ol>
<p>The port numbers on which the communication occurs are as below:</p>
<ol>
<li><strong>TCP on port number 135</strong>. This is the main port where communication occurs.</li>
<li><strong>TCP on 49152–65535</strong>, i.e. the <strong>RPC dynamic port range</strong>. A dynamic port is assigned from this range for validation-related communication.</li>
</ol>
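<p>The port requirements above boil down to a simple check, sketched here in Python (the helper name is mine, for illustration only):</p>

```python
# Check whether a destination TCP port belongs to the ranges RDP licensing
# traffic uses: port 135 plus the RPC dynamic range 49152-65535.
RPC_DYNAMIC_RANGE = range(49152, 65536)  # 49152-65535 inclusive

def is_rdp_licensing_port(port: int) -> bool:
    """True if firewall rules for RDP licensing must allow this port."""
    return port == 135 or port in RPC_DYNAMIC_RANGE

# 135 and a dynamically assigned RPC port qualify; 443 does not.
print(is_rdp_licensing_port(135))    # True
print(is_rdp_licensing_port(50000))  # True
print(is_rdp_licensing_port(443))    # False
```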
<p>Using this information, ensure that the firewall in your environment is configured appropriately to allow this communication.</p>]]></description>
<link>https://HarvestingClouds.com/post/connectivity-requirements-for-rdp-licensing-server-connectivity-and-firewall-rules</link>
<pubDate>Thu, 28 Jan 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Troubleshooting the backup issues for SQL Server on Azure VM or SAP HANA on Azure VM (Database). </title>
<description><![CDATA[<p>When you have SQL databases on Azure VMs that are syncing with Azure Backup, you may run into various issues. These could be due to misconfigurations or missing dependencies. In this blog post, we will take a look at a few of the common issues that I have faced and possible solutions to them.</p>
<h3>1. The first thing to do for SAP HANA on Azure VM (Database)</h3>
<p>Before you go down the path of troubleshooting, make sure that you have re-run the pre-registration script. You will either resolve the issue or gain a better understanding of why it is occurring. You can download the script from this link: <a href="https://aka.ms/scriptforpermsonhana" target="_blank">Pre-registration Script</a></p>
<p>Note that you may need to re-discover the databases after running the script again. If you look at the official troubleshooting guide, you will notice that most issues are resolved by this one step, i.e. re-running the pre-registration script.</p>
<h3>2. Unhealthy status</h3>
<p><strong>Possible cause # 1</strong>: </p>
<ul>
<li>The unhealthy status could simply mean that you have just configured the backup on a database and no backup for that database is present in the Azure Recovery Services vault yet. In this scenario, you can either wait for the backup schedule to trigger the initial backup, or trigger a manual backup.</li>
</ul>
<p><strong>Possible cause # 2</strong>:</p>
<ul>
<li>Another reason could be that the Backup service is not able to take the backups of the SQL databases on the Azure VM. You should navigate to the Monitoring section in the recovery services vault and click on the Backup jobs. Filter the jobs by the name of the database or the Azure VM name. Look for any failed jobs and check the error details. These failed jobs will also contain the details about the reason why the job failed. </li>
</ul>
<p>In one instance, my jobs were failing because the disk drive for the logs was completely full. The resolution was to shut down the VM, expand the disk via the Azure portal, and start the VM again. Then extend the volume from Disk Management within the VM. We manually triggered a full backup and it was successful, which in turn changed the status from Unhealthy to Healthy.</p>
<h3>3. Unreachable or Not reachable status</h3>
<p>The most common cause of this issue is that the Azure Backup service is not able to reach the VM; chances are that the backup-related communication is being blocked somewhere in the network. The backup agent on the VM sends the backup data to Azure, and this communication happens mostly <strong>outbound from the VM</strong> to the following services in Azure:</p>
<ol>
<li>Azure Backup</li>
<li>Azure Storage</li>
<li>Azure Active Directory</li>
</ol>
<p>If any of these services is not reachable, the status changes to Unreachable. </p>
<p>You may have either or both of the following scenarios in your environment:</p>
<ol>
<li>Your VM has an NSG linked to the VM's network interface card or an NSG linked to the subnet in which the VM exists.</li>
<li>Your VM is in an environment where all communication is gated via a Network Virtual Appliance (NVA) or Firewall.</li>
</ol>
<p>If you have NSGs, skip ahead to the NSG configuration section to see how to configure them for the backup service to work. If you have a firewall, continue here on how to troubleshoot.</p>
<p>Look in the firewall for blocked outbound communication from the VM to these services. You will need to validate against the public IP addresses of these public services. But how do you find those IP addresses? We will look at that next.</p>
<h3>4. How to find the IP addresses for Azure services</h3>
<p>Microsoft publishes the public IP addresses for all of its services in a <strong>JSON file</strong>. This is categorized by Service tags. These service tags correspond to each of the Azure services. </p>
<p>You can download this JSON file by navigating to this link: <a href="https://www.microsoft.com/en-us/download/details.aspx?id=56519" target="_blank">Azure IP Ranges and Service Tags – Public Cloud</a></p>
<p>After downloading this JSON, search for &quot;Storage&quot;, &quot;Active Directory&quot; and &quot;Backup&quot; services to find the section which defines a list of public IP addresses for that service. Note that there will be multiple IP address ranges for each service. Look for the region-specific ranges where you have deployed the VM and the backup vault.</p>
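<p>The lookup described above is easy to script. Below is a minimal Python sketch; it mirrors the shape of the downloaded JSON (&quot;values&quot; entries with a name and address prefixes), but the prefixes shown are made-up placeholders, not Microsoft's real published ranges:</p>

```python
import ipaddress

# Illustrative fragment in the shape of the Service Tags JSON; the
# prefixes are placeholders, not the real published ranges.
service_tags = {
    "values": [
        {"name": "AzureBackup",
         "properties": {"addressPrefixes": ["20.1.0.0/16", "52.10.8.0/24"]}},
        {"name": "Storage",
         "properties": {"addressPrefixes": ["20.60.0.0/16"]}},
    ]
}

def tags_containing(ip: str) -> list:
    """Return the service-tag names whose prefixes contain the given IP."""
    addr = ipaddress.ip_address(ip)
    return [
        v["name"]
        for v in service_tags["values"]
        if any(addr in ipaddress.ip_network(p)
               for p in v["properties"]["addressPrefixes"])
    ]

# A blocked destination IP can be matched back to the service it belongs to.
print(tags_containing("20.60.1.5"))  # ['Storage']
```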
<p>If you find an IP address from the list being blocked for any of the 3 services mentioned above, then you should follow the next section and work with your Firewall team to open these required rules. </p>
<h3>5. Configuration of the Firewall to allow Azure backup</h3>
<p>Note this only applies to the backup of SQL Server on Azure VM or SAP HANA on Azure VM (Database). Allow access to the following services in the Firewall for the network infrastructure where the Azure VM exists.</p>
<table class="table"><caption class="visually-hidden">Allow access to service FQDNs</caption>
<thead>
<tr>
<th>Service</th>
<th>Domain names to be accessed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Azure Backup</td>
<td><code>*.backup.windowsazure.com</code></td>
</tr>
<tr>
<td>Azure Storage</td>
<td><code>*.blob.core.windows.net</code> <br><br> <code>*.queue.core.windows.net</code></td>
</tr>
<tr>
<td>Azure AD</td>
<td>Allow access to FQDNs under sections 56 and 59 according to <a href="https://docs.microsoft.com/en-us/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide#microsoft-365-common-and-office-online" target="_blank">this article</a></td>
</tr>
</tbody>
</table>
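<p>To sanity-check a proposed allow-list against the wildcard FQDNs in the table above, here is a quick Python sketch (the helper and its pattern list are mine, for illustration only):</p>

```python
from fnmatch import fnmatch

# Wildcard FQDNs from the table above that the firewall must allow.
ALLOWED_FQDNS = [
    "*.backup.windowsazure.com",   # Azure Backup
    "*.blob.core.windows.net",     # Azure Storage (blobs)
    "*.queue.core.windows.net",    # Azure Storage (queues)
]

def is_allowed(hostname: str) -> bool:
    """True if the hostname matches any allow-listed wildcard FQDN."""
    return any(fnmatch(hostname.lower(), pat) for pat in ALLOWED_FQDNS)

print(is_allowed("pod01.backup.windowsazure.com"))  # True
print(is_allowed("example.com"))                    # False
```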
<h3>6. NSG configurations to allow Azure backup</h3>
<p>Note this only applies to the backup of SQL Server on Azure VM or SAP HANA on Azure VM (Database). </p>
<p>Go to the NSG (Network Security Group) of the Azure VM, which is linked either via the subnet or via the network interface of the VM. Navigate to the Outbound rules and add three rules to allow communication to the below destination service tags on all ports:</p>
<ol>
<li>Storage</li>
<li>Azure Backup</li>
<li>Azure Active Directory</li>
</ol>
<p>The rules will look like below:</p>
<img src="/images/16170808506062b212f0139.png" alt="NSG Rules">
<h3>7. Permissions issue or UserErrorSQLNoSysadminMembership</h3>
<p>If you are getting the &quot;UserErrorSQLNoSysadminMembership&quot; error, it means the permissions for SQL Server on the Azure VM have not been set appropriately. This usually happens if you built the SQL Server on the Azure VM yourself, i.e. you didn't use a marketplace image for SQL Server on Azure VM. </p>
<p>You need to set the permissions as defined here: <a href="https://docs.microsoft.com/en-us/azure/backup/backup-azure-sql-database#set-vm-permissions" target="_blank">Set VM permissions</a>.</p>
<h3>More Information</h3>
<p>If you are still facing issues, refer to the official documentation below: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/tutorial-backup-sap-hana-db#prerequisites" target="_blank">Back up SAP HANA databases in an Azure VM</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/tutorial-sql-backup" target="_blank">Back up a SQL Server database in an Azure VM</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/backup-sql-server-azure-troubleshoot" target="_blank">Troubleshoot SQL Server database backup by using Azure Backup</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/backup/backup-azure-sap-hana-database-troubleshoot" target="_blank">Troubleshoot backup of SAP HANA databases on Azure</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-troubleshooting-the-backup-issues-for-sql-server-on-azure-vm-or-sap-hana-on-azure-vm-database</link>
<pubDate>Wed, 20 Jan 2021 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Understanding the backup policy for SAP HANA in Azure VM (Database)</title>
<description><![CDATA[<p>Azure Backup can be leveraged to discover and backup the SAP HANA database deployed in the Azure VMs. The backup behavior is governed by the &quot;<strong>Backup Policies</strong>&quot;. There are different policies specific to different kinds of workloads. In this post, we are looking at the backup policy for the backup of the SAP HANA in Azure VM (Database).</p>
<h3>Policy Creation</h3>
<p>Navigate to the Recovery Services vault and then, under Manage, click on <strong>Backup Policies</strong>. To create a new policy, click on the &quot;<strong>+ Add</strong>&quot; button.</p>
<img src="/images/1617070042606287da1f8be.png" alt="Backup policies">
<p>For the policy type, select &quot;<strong>SAP HANA in Azure VM (Database)</strong>&quot;. This will open the blade to Create the policy for SAP HANA databases which are deployed in the Azure VMs.</p>
<img src="/images/161707265960629213ac8e0.png" alt="Select Policy Type">
<p>In the blade to create the policy, you need to provide the below information:</p>
<ol>
<li>Policy name - provide a descriptive name</li>
<li>Full Backup - this is the schedule and retention of the full backup of the databases</li>
<li>Differential Backup - this controls the differential backup of the databases</li>
<li>Incremental Backup - this controls the incremental backup of the databases</li>
<li>Log Backup - this defines the log backup for the databases</li>
</ol>
<img src="/images/16170729086062930cc12fb.png" alt="Create Policy">
<p>For the <strong>full backup</strong> in the policy, if you enable it then at minimum you need to define:</p>
<ol>
<li>Frequency - daily or weekly</li>
<li>Time and Time zone - when the backup will occur</li>
<li>Retention for the daily backup. It can be a minimum of 7 days and up to a maximum of 9999 days.</li>
</ol>
<p>Optionally you can also configure the monthly, weekly, and yearly backup retention policies.</p>
<img src="/images/16170729976062936570128.png" alt="Full Backup Policy">
<p>The <strong>log backup policy</strong> determines the behavior of the log backups. At minimum you need to define:</p>
<ol>
<li>The frequency of the log backups - it can range from a minimum of 15 minutes to a maximum of once every 24 hours</li>
<li>Retention of the log backups - it can range from a minimum of 7 days to a maximum of 14 days</li>
</ol>
<img src="/images/16170730066062936e31b78.png" alt="Log Backup Policy">
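<p>The limits above can be captured in a small validation helper. This is a hypothetical Python sketch of my own, not part of any Azure SDK:</p>

```python
# Validate an SAP HANA backup policy against the limits described above:
# full-backup retention 7-9999 days, log-backup frequency 15 minutes to
# 24 hours, log-backup retention 7-14 days.
def validate_hana_policy(full_retention_days: int,
                         log_frequency_minutes: int,
                         log_retention_days: int) -> list:
    errors = []
    if not 7 <= full_retention_days <= 9999:
        errors.append("full-backup retention must be 7-9999 days")
    if not 15 <= log_frequency_minutes <= 24 * 60:
        errors.append("log-backup frequency must be 15 minutes to 24 hours")
    if not 7 <= log_retention_days <= 14:
        errors.append("log-backup retention must be 7-14 days")
    return errors

# 30-day full retention, hourly log backups kept 7 days: a valid policy.
print(validate_hana_policy(30, 60, 7))  # []
```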
<h3>Practical Backup Considerations</h3>
<p>Below are some general backup considerations based on my experience in different projects. These are only generic considerations and should be evaluated against your organization's compliance standards and specific requirements.</p>
<ul>
<li>The backup should run at a time when the systems are not being used by any of the teams. If you work in global teams, i.e. the system is used around the clock, pick the time when the system is under the least load.</li>
<li>Use smaller retention windows in the non-production environments than in the production environments.</li>
<li>If the data is not critical, a Full backup is usually taken daily and retained for around 14 days in non-prod environments. Weekly, monthly, and yearly retention periods are not required. </li>
<li>In the prod environment, the Full backup is still taken daily but the retention period is increased to more than 30 days (the maximum you can select is 9999 days). Also, based on your SLAs and compliance standards, you should enable the weekly, monthly, and yearly backup retention periods as well.</li>
<li>Differential backup can be skipped in the non-production environments based on the criticality of the data in the lower environments</li>
<li>Log backup can be reduced to once every 2 or 4 hours with a retention period of around 7 days in the non-prod environments. </li>
<li>In the prod environments the Log backup can be set to once every hour or less. The retention should be around 14 days based on the SLAs; note that 14 days is the maximum.</li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-understanding-the-backup-policy-for-sap-hana-in-azure-vm-database</link>
<pubDate>Mon, 14 Dec 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Understanding the backup policy for SQL Server in Azure VM</title>
<description><![CDATA[<p>Azure Backup can be leveraged to discover and backup the SQL Server deployed in the Azure VMs. The backup behavior is governed by the &quot;<strong>Backup Policies</strong>&quot;. There are different policies specific to different kinds of workloads. In this post, we are looking at the backup policy for the backup of the SQL Server in Azure VM.</p>
<h3>Policy Creation</h3>
<p>Navigate to the Recovery Services vault and then, under Manage, click on <strong>Backup Policies</strong>. To create a new policy, click on the &quot;<strong>+ Add</strong>&quot; button.</p>
<img src="/images/1617070042606287da1f8be.png" alt="Backup policies">
<p>For the policy type, select &quot;<strong>SQL Server in Azure VM</strong>&quot;. This will open the blade to Create the policy for &quot;SQL Server in Azure VM&quot;.</p>
<img src="/images/16170703996062893f52933.png" alt="Select Policy Type">
<p>In the blade to create the policy, you need to provide the below information:</p>
<ol>
<li>Policy name - provide a descriptive name</li>
<li>Full Backup - this is the schedule and retention of the full backup of the databases on the SQL Server in Azure VM.</li>
<li>Differential Backup - this controls the differential backup of the databases on the SQL Server in Azure VM</li>
<li>Log Backup - this defines the log backup for the databases</li>
<li>SQL Backup Compression - you can enable or disable the backup compression. The default value is disabled.</li>
</ol>
<img src="/images/161707060360628a0b9f421.png" alt="Create Policy">
<p>For the <strong>full backup</strong> in the policy, if you enable it then at minimum you need to define:</p>
<ol>
<li>Frequency - daily or weekly</li>
<li>Time and Time zone - when the backup will occur</li>
<li>Retention for the daily backup</li>
</ol>
<p>Optionally you can also configure the monthly, weekly, and yearly backup retention policies.</p>
<img src="/images/161707140060628d2891621.png" alt="Full Backup Policy">
<p>The <strong>log backup policy</strong> determines the behavior of the log backups. At minimum you need to define:</p>
<ol>
<li>The frequency of the log backups - it can range from a minimum of 15 minutes to a maximum of once every 24 hours</li>
<li>Retention of the log backups - it can range from minimum of 7 days to a maximum of 35 days</li>
</ol>
<img src="/images/161707140860628d30af713.png" alt="Log Backup Policy">
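<p>As a rough sanity check on the limits above, the number of log recovery points a policy keeps can be estimated with a little arithmetic (a hypothetical sketch of my own, not an Azure SDK call):</p>

```python
# Roughly, log recovery points kept = (log backups per day) x (retention
# days). Limits per the post: frequency 15 minutes to 24 hours, retention
# 7 to 35 days.
def log_recovery_points(frequency_minutes: int, retention_days: int) -> int:
    assert 15 <= frequency_minutes <= 24 * 60, "frequency out of range"
    assert 7 <= retention_days <= 35, "retention out of range"
    per_day = (24 * 60) // frequency_minutes
    return per_day * retention_days

# Hourly log backups kept for 14 days:
print(log_recovery_points(60, 14))  # 336
```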
<h3>Practical Backup Considerations</h3>
<p>Below are some general backup considerations based on my experience in different projects. These are only generic considerations and should be evaluated against your organization's compliance standards and specific requirements.</p>
<ul>
<li>The backup should run at a time when the systems are not being used by any of the teams. If you work in global teams, i.e. the system is used around the clock, pick the time when the system is under the least load.</li>
<li>Use smaller retention windows in the non-production environments than in the production environments.</li>
<li>If the data is not critical, a Full backup is usually taken daily and retained for around 14 days in non-prod environments. Weekly, monthly, and yearly retention periods are not required. </li>
<li>In the prod environment, the Full backup is still taken daily but the retention period is increased to more than 30 days (the maximum you can select is 9999 days). Also, based on your SLAs and compliance standards, you should enable the weekly, monthly, and yearly backup retention periods as well.</li>
<li>Differential backup can be skipped in the non-production environments based on the criticality of the data in the lower environments</li>
<li>Log backup can be reduced to once every 2 or 4 hours with a retention period of 7-14 days in the non-prod environments. </li>
<li>In the prod environments the Log backup can be set to once every hour or less. The retention should be 14-35 days based on the SLAs; note that 35 days is the maximum.</li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-understanding-the-backup-policy-for-sql-server-in-azure-vm</link>
<pubDate>Fri, 11 Dec 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL - Sync to other databases - 4 - Configure sync group</title>
<description><![CDATA[<p>In our last post, we discussed how to add sync members. After the new sync group members are created and deployed, &quot;Configure sync group&quot; is highlighted on the New sync group page.</p>
<p>To begin, navigate to the Azure SQL database and click on the &quot;Sync to other databases&quot; section. Then click on the <strong>Tables</strong> section as highlighted below.</p>
<img src="/images/161723418060650904a5f3d.png" alt="Sync Group">
<p>Click on the ellipsis in the Tables section. This opens a blade where you can select the database you want to sync and refresh its schema. </p>
<img src="/images/16172341876065090b38177.png" alt="Selecting Tables">
<p>As shown below, select the database, choose the tables and columns to sync, and click Save.</p>
<img src="/images/16172341946065091270898.png" alt="Populated Table and its Schema">
<p>To trigger the sync process, navigate to the Database sync group page and click on Sync as shown below. This ensures that the databases are synced in the direction you have specified.</p>
<img src="/images/161723420060650918693fe.png" alt="Trigger Sync">
<p>To stop the syncing process, click on the Stop button as shown below. </p>
<img src="/images/161723421460650926ea517.png" alt="Stop Syncing">
<p>After a successful sync, you will be able to see the synced tables and their schema in the db_test01 database from the db_test02 database.</p>
<p>To read more about the sync groups, check the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-sql-server-configure" target="_blank">Set up SQL Data Sync between databases in Azure SQL Database and SQL Server</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-sync-to-other-databases-4-configure-sync-group</link>
<pubDate>Thu, 03 Dec 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL - Sync to other databases - 3 - Add sync members</title>
<description><![CDATA[<p>In the previous post, we saw how to create the sync group for the Azure SQL Database. If you haven’t gone through that post yet, please check it first. In this post, we will learn how to add sync members to the already created sync group.</p>
<p>To begin, navigate to the existing sync group and click on it to open the new blade as shown below.</p>
<img src="/images/161723379960650787b5b36.png" alt="Databases section under Sync groups">
<p>Click on the Databases section to add the databases as members of the sync group.</p>
<img src="/images/16172338056065078da820a.png" alt="Select Sync members">
<p>Next, click on “Add an Azure Database”. Provide the following details about the target database to add it as a member for syncing.</p>
<img src="/images/1617233811606507932ded7.png" alt="Configure Azure Database">
<p>Once you click Ok, you will see the databases added under the &quot;Member Database&quot; section as shown below.</p>
<img src="/images/161723381760650799cd29e.png" alt="Member Database section">
<p>You can add more Azure SQL databases by clicking on “Add an Azure Database”. If you want to add an on-premises SQL database as a member database, go with the “Add an On-Premises Database” option.</p>
<link>https://HarvestingClouds.com/post/azure-sql-sync-to-other-databases-3-add-sync-members</link>
<pubDate>Wed, 25 Nov 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL - Sync to other databases - 2 - Create sync group</title>
<description><![CDATA[<p>In the previous blog, you read about what the &quot;Sync to other databases&quot; feature of Azure SQL Database is. In this blog, we will learn how to create a sync group.</p>
<p>To start, navigate to the Azure SQL database on which you want to create the sync group and click on the &quot;Sync to other databases&quot; option. Then click on “<strong>New Sync Group</strong>” as shown below.</p>
<img src="/images/16172332036065053355738.png" alt="New sync group option">
<p>This will open the new blade for &quot;Create Data Sync Group&quot;.</p>
<img src="/images/161723320960650539938fe.png" alt="Create Data Sync Group blade">
<p>Following are the details you need to provide while creating the sync group:</p>
<ol>
<li>Provide a meaningful name for the sync group.</li>
<li>Sync Metadata Database – Microsoft recommends creating a new database for this property, but for the sake of the demo, I am using an existing database.</li>
<li>Automatic Sync – If you select On, enter a number and select Seconds, Minutes, Hours, or Days in the Sync Frequency section.
The first sync begins after the selected interval elapses from the time the configuration is saved.</li>
<li>Conflict Resolution – It has two options: Hub win and Member win. With Hub win, when conflicts occur, data in the hub database overwrites conflicting data in the member database.
With Member win, data in the member database overwrites conflicting data in the hub database.</li>
<li>Private Link – This option is currently in preview and is managed by Microsoft.</li>
</ol>
<p>Once you click OK, the sync group is created as shown below.</p>
<img src="/images/16172332156065053f084d8.png" alt="Added Sync Groups">
<p>You can add more Azure SQL databases by clicking on “Add an Azure Database”. If you want to add an on-premises SQL database as a member database, go with the “Add an On-Premises Database” option.</p>
<p>That's all there is to it. In the next posts, we will delve deeper into sync groups and related options.</p>
<link>https://HarvestingClouds.com/post/azure-sql-sync-to-other-databases-2-create-sync-group</link>
<pubDate>Tue, 17 Nov 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL - Sync to other databases - 1 - Overview</title>
<description><![CDATA[<p>“Sync to other databases” is a feature of Azure SQL Database. Azure SQL Data Sync is a service used to replicate tables in an Azure SQL database to another Azure SQL database or to on-premises databases. Data can be replicated one way or bidirectionally. An important note: Azure SQL Managed Instance doesn’t support the Data Sync feature as of now. </p>
<p>The concept is built around hub and member databases. All the databases in the member group can sync with the hub. The member databases can be on-premises or cloud databases.</p>
<h3>Where to access this feature</h3>
<p>Navigate to the SQL databases and click on the “Sync to other databases” in the settings as shown below. </p>
<img src="/images/1617232808606503a8eecaf.png" alt="Sync to other databases">
<p>In the upcoming posts, we will look into this feature in detail.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-sync-to-other-databases-1-overview</link>
<pubDate>Thu, 12 Nov 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Backup Center in Microsoft Azure</title>
<description><![CDATA[<p>The <strong>Backup Center</strong> provides a single pane of glass into the whole backup posture of the Azure environment. It provides a <strong>single unified management experience</strong> in Azure for enterprises to govern, monitor, operate, and analyze backups at scale.</p>
<p>With the Backup Center you can manage:</p>
<ol>
<li>Backup instances - all instances across different vaults</li>
<li>Backup policies - all policies across different vaults</li>
<li>Vaults - all Recovery Services Vaults</li>
</ol>
<p>You can also:</p>
<ol>
<li>Monitor all backup jobs and filter for failed jobs</li>
<li>Fetch backup reports</li>
</ol>
<p>You can also control policy compliance for Backup directly from the Backup Center.</p>
<p>The Backup Center currently supports Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup, and Azure Database for PostgreSQL Server backup.</p>
<img src="/images/16169881876061481b27edc.png" alt="Backup Center">
<p>You can learn more about the Backup Center here: <a href="https://docs.microsoft.com/en-us/azure/backup/backup-center-overview" target="_blank">Overview of Backup center</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-backup-center-in-microsoft-azure</link>
<pubDate>Sat, 10 Oct 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Bring your own keys (BYOK) for recovery services vaults - 5 Assign the encryption key to the Recovery Services vault</title>
<description><![CDATA[<p>In this post, we will assign a key from a Key Vault to the Recovery Services vault, so that the vault is encrypted with your custom key.</p>
<p>First, make sure that you have a key in the key vault. The key must be an RSA 2048 key and in the <strong>Enabled</strong> state. If you don't have one, you can generate one using the key vault. In a practical scenario, you should use your own custom keys. </p>
<p>You can generate a custom key in the Key Vault by navigating to &quot;Keys&quot; under Settings. Next, click on the &quot;+ Generate/Import&quot; button. In the next blade, &quot;Create a key&quot;, provide values similar to those below and click the Create button to create the key.</p>
<img src="/images/1617062101606268d5e9d33.png" alt="Generating a Key">
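<p>If you prefer scripting, the same key can be generated with the Azure CLI. A minimal sketch, assuming the CLI is installed, you are logged in, and using placeholder names (<code>my-keyvault</code>, <code>backup-encryption-key</code>):</p>

```shell
# Generate an enabled RSA 2048 key in an existing Key Vault.
# "my-keyvault" and "backup-encryption-key" are placeholder names.
az keyvault key create \
  --vault-name my-keyvault \
  --name backup-encryption-key \
  --kty RSA \
  --size 2048

# Verify the key is Enabled and note its key identifier (URI).
az keyvault key show \
  --vault-name my-keyvault \
  --name backup-encryption-key \
  --query "{enabled: attributes.enabled, kid: key.kid}"
```

The key identifier (URI) printed by the second command is what you would paste into the portal if you choose the "provide a URI" option later.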
<p>Now that you have a key in the Key vault, we can use this to encrypt the backup vault (i.e. the recovery services vault). Navigate to the recovery services vault and click on Properties. Here, click on the &quot;Update&quot; link under the Encryption Settings.</p>
<img src="/images/1617062354606269d2c684c.png" alt="Encryption Settings">
<p>To use your own key, select the check box for &quot;Use your own key&quot;. You can then either provide a URI for the key or select a key from a key vault. In this post, we will select the second option. Next, click on the &quot;Select key from Key Vault&quot; link to open a new blade.</p>
<img src="/images/161706253660626a8857e91.png" alt="Selecting the option to use your own key">
<p>In the new blade for &quot;Select key from Azure Key Vault&quot;, select the subscription, Key vault, and the Key that you want to use to encrypt. Click on the Select button and then the Save button to save the settings. </p>
<img src="/images/161706262960626ae540618.png" alt="Selecting the Key from the Key Vault">
<p>When you click Save, the setting is not applied immediately. Saving submits a job that you can monitor in the Backup Jobs section under Monitoring in the recovery services vault. Only once this job completes successfully, as shown below, is the configuration complete.</p>
<img src="/images/161706301660626c685a4ea.png" alt="Backup Job">
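<p>For automation, newer versions of the Azure CLI expose a similar operation. A hedged sketch only — resource names and the key URI are placeholders, and the command group may vary by CLI version:</p>

```shell
# Assign a customer-managed key to the Recovery Services vault,
# using the vault's system-assigned managed identity.
# All names and the key URI below are placeholders.
az backup vault encryption update \
  --resource-group my-rg \
  --name my-recovery-vault \
  --encryption-key-id "https://my-keyvault.vault.azure.net/keys/backup-encryption-key" \
  --mi-system-assigned
```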
<p>Note that once you have enabled this setting, you can't revert to using platform-managed keys. The check box for &quot;Use your own key&quot; in the Encryption settings becomes disabled. You can, however, update the custom key used for encryption.</p>
<link>https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-5-assign-the-encryption-key-to-the-recovery-services-vault</link>
<pubDate>Sun, 05 Jul 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Bring your own keys (BYOK) for recovery services vaults - 4 Enable Soft Delete and purge protection on the Azure Key Vault</title>
<description><![CDATA[<p>Soft delete and purge protection provide key vault recovery features. These two features work together and differ only slightly. </p>
<ul>
<li><strong>Soft Delete</strong> - This prevents accidental deletion of the key vault and its related objects such as keys, secrets, and certificates. You can recover any deleted item within a retention period that you define (in a number of days). You can still purge the deleted resources and thereby permanently delete them.</li>
<li><strong>Purge Protection</strong> - With purge protection enabled, soft-deleted items can not be purged until the retention period specified for Soft Delete expires. Together with Soft Delete, this ensures that deleted key vault resources can not be purged before the retention period has expired.</li>
</ul>
<p><strong>Note</strong> that both Soft Delete and Purge protection settings are required to use the key vault with the recovery services vault to provide your custom key for encryption. </p>
<p>These settings are in the Properties section of the Key Vault. When you enable the Soft delete option, you also provide the retention period in days for the deleted vaults.</p>
<img src="/images/16170602196062617b43ef8.png" alt="Soft delete and purge protection">
<p>You can set up the Soft delete and purge protection during the creation of the key vault as well. </p>
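<p>Both settings can also be applied from the Azure CLI. A sketch with placeholder names (note that soft delete is enabled by default on newer vaults, and purge protection, once on, can not be turned off):</p>

```shell
# Enable purge protection on an existing key vault (irreversible).
# "my-keyvault" and "my-rg" are placeholder names.
az keyvault update \
  --name my-keyvault \
  --resource-group my-rg \
  --enable-purge-protection true

# For a new vault, the soft-delete retention period (7-90 days)
# can be set at creation time.
az keyvault create \
  --name my-new-keyvault \
  --resource-group my-rg \
  --location eastus \
  --retention-days 90 \
  --enable-purge-protection true
```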
<p>When a key vault that has soft delete enabled is deleted:</p>
<ul>
<li>You can not create any other vault with the same name as the deleted vault</li>
<li>You may list all of the key vaults and key vault objects in the soft-deleted state for your subscription as well as access deletion and recovery information about them. Users should be granted appropriate permissions to be able to list the keys of the deleted vaults.</li>
<li>Only a specifically privileged user may restore a key vault or key vault object by issuing a recover command on the corresponding proxy resource.</li>
<li>Only a specifically privileged user may forcibly delete a key vault or key vault object by issuing a delete command on the corresponding proxy resource.</li>
</ul>
<p><strong>Note</strong>: </p>
<ul>
<li>Unless a key vault or key vault object is recovered, at the end of the retention interval the service performs a purge of the soft-deleted key vault or key vault object and its content. Resource deletion may not be rescheduled.</li>
<li>Once soft delete has been enabled, it cannot be disabled.</li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-4-enable-soft-delete-and-purge-protection-on-the-azure-key-vault</link>
<pubDate>Wed, 01 Jul 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Bring your own keys (BYOK) for recovery services vaults - 3 Assign permissions to the vault to access the encryption key in the Azure Key Vault</title>
<description><![CDATA[<p>In the previous blog, we created a managed identity. That identity now needs permissions on the Key Vault. The Azure Backup service will use it to access the keys in the target Key Vault. </p>
<p>Navigate to the Azure Key Vault that you want to use with the backup vault. Click on Access policies under Settings, then click on the &quot;+ Add Access Policy&quot; link.</p>
<img src="/images/161705879260625be862fb1.png" alt="Access Policy">
<p>In the new blade, &quot;<strong>Add access policy</strong>&quot;, open the &quot;Key permissions&quot; drop-down and select the following permissions:</p>
<ul>
<li>Get and List under Key Management Operations</li>
<li>Wrap Key and Unwrap Key under Cryptographic Operations</li>
</ul>
<img src="/images/161705920660625d8641569.png" alt="Key Permissions">
<p>Next, under Select principal, click on the &quot;None selected&quot; link to bring up the &quot;Select a principal&quot; blade. Here, provide the Object ID of the managed identity of the recovery services vault, which was created in the previous step. Once you enter the ID, the name of the vault along with the ID will appear. Click on the name and it will be added to the &quot;Selected items&quot;. Click on the <strong>Select</strong> button at the bottom of the blade, then click on the <strong>Add</strong> button to add the access policy.</p>
<img src="/images/161705921360625d8da654a.png" alt="Selecting a Principal">
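<p>The equivalent access policy can be granted in one CLI call. A sketch, assuming placeholder names and that you have the vault identity's Object ID at hand (the all-zeros GUID below is a placeholder):</p>

```shell
# Grant the Recovery Services vault's managed identity the four key
# permissions Azure Backup needs: Get, List, Wrap Key, Unwrap Key.
az keyvault set-policy \
  --name my-keyvault \
  --object-id "00000000-0000-0000-0000-000000000000" \
  --key-permissions get list wrapKey unwrapKey
```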
<p>Note that the policy has not been added yet. To finalize it, make sure to click on the Save button as shown below. This is a common mistake I have seen people make, and I am guilty of it myself. If you navigate away from this screen, the policy will not be saved.</p>
<img src="/images/161705922360625d973dd1b.png" alt="Saving the updated access policy">]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-3-assign-permissions-to-the-vault-to-access-the-encryption-key-in-the-azure-key-vault</link>
<pubDate>Mon, 29 Jun 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Bring your own keys (BYOK) for recovery services vaults - 2 Enable managed identity for your Recovery Services vault</title>
<description><![CDATA[<p>Azure Backup uses a system-assigned managed identity to authenticate the Recovery Services vault when accessing encryption keys stored in the Azure Key Vault. We will use this identity later to grant permissions inside the Key Vault. </p>
<p>To enable the managed identity for the Recovery Services vault, navigate to the vault and select &quot;<strong>Identity</strong>&quot; under Settings. Toggle the Status to On. Click the save button to save the settings.</p>
<img src="/images/1617058226606259b256fc7.png" alt="Identity settings">
<p>You will get a warning explaining that you are setting up a managed identity for the vault and that the vault will be accessible via an Azure Active Directory managed identity. Hit OK to proceed. Once the settings are saved, you will see an Object ID; take a note of it. You will also see a button for &quot;<strong>Azure role assignments</strong>&quot; where you can manage the permissions for this managed identity.</p>
<img src="/images/161705848260625ab254b59.png" alt="Saved Identity settings">
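<p>If you script your deployments, recent Azure CLI versions can enable the same identity. A hedged sketch with placeholder names (the exact command group may vary by CLI version):</p>

```shell
# Enable the system-assigned managed identity on the vault.
# "my-rg" and "my-recovery-vault" are placeholder names.
az backup vault identity assign \
  --system-assigned \
  --resource-group my-rg \
  --name my-recovery-vault

# Retrieve the identity's Object ID (principalId) for the next step.
az backup vault show \
  --resource-group my-rg \
  --name my-recovery-vault \
  --query "identity.principalId" -o tsv
```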
<p><strong>Note</strong>: </p>
<ul>
<li>A system-assigned managed identity is restricted to one per resource.</li>
<li>It is tied to the lifecycle of the resource: when the resource is deleted, its managed identity is deleted as well.</li>
<li>You can assign permissions to this identity via RBAC, similar to how you assign permissions to individual users</li>
<li>The managed identity is authenticated with Azure AD, so you don’t have to store any credentials in code.</li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-2-enable-managed-identity-for-your-recovery-services-vault</link>
<pubDate>Thu, 25 Jun 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Bring your own keys (BYOK) for recovery services vaults - 1 Overview</title>
<description><![CDATA[<p>You can bring your own keys (<strong>BYOK</strong>) to encrypt the data in the Azure Recovery Services vaults. These keys will be used to encrypt all the data stored for backup for all items. By default, the system uses &quot;<strong>platform-managed keys</strong>&quot;. The custom keys that you provide are referred to as &quot;<strong>customer-managed keys</strong>&quot;.</p>
<p>This provides you with more granular control over the encryption process. This also ensures that only you as a customer control the data encryption and decryption. It is highly recommended for any sensitive data that you are protecting in the cloud.</p>
<p>Please note that:</p>
<ul>
<li>You should configure the keys before protecting any items in the vault</li>
<li>Once you have switched to customer-managed keys, you can not revert to platform-managed keys afterward.</li>
<li>The Recovery Services vault can be encrypted only with keys stored in an <strong>Azure Key Vault</strong>, located in the same region. </li>
<li>Also, keys must be <strong>RSA 2048</strong> keys only and should be in the enabled state.</li>
</ul>
<p>To configure your vault to encrypt with customer-managed keys, the steps below must be followed, in this exact order:</p>
<ol>
<li>Enable managed identity for your Recovery Services vault</li>
<li>Assign permissions to the vault to access the encryption key in the Azure Key Vault</li>
<li>Enable soft-delete and purge protection on the Azure Key Vault</li>
<li>Assign the encryption key to the Recovery Services vault</li>
</ol>
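<p>For reference, the four steps above can be sketched end to end with the Azure CLI. This is only an illustrative outline — all resource and key names are placeholders, and the exact command groups may differ between CLI versions:</p>

```shell
# 1. Enable a system-assigned managed identity on the Recovery Services vault.
az backup vault identity assign --system-assigned \
  --resource-group my-rg --name my-recovery-vault
OBJECT_ID=$(az backup vault show --resource-group my-rg \
  --name my-recovery-vault --query "identity.principalId" -o tsv)

# 2. Allow that identity to use keys in the Key Vault.
az keyvault set-policy --name my-keyvault --object-id "$OBJECT_ID" \
  --key-permissions get list wrapKey unwrapKey

# 3. Ensure purge protection is on (soft delete is on by default; this is irreversible).
az keyvault update --name my-keyvault --resource-group my-rg \
  --enable-purge-protection true

# 4. Assign an RSA 2048 key from the vault as the encryption key.
KEY_ID=$(az keyvault key show --vault-name my-keyvault \
  --name backup-encryption-key --query "key.kid" -o tsv)
az backup vault encryption update --resource-group my-rg \
  --name my-recovery-vault --encryption-key-id "$KEY_ID" --mi-system-assigned
```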
<p>We will look at these steps in detail in the following posts.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-bring-your-own-keys-byok-for-recovery-services-vaults-1-overview</link>
<pubDate>Sun, 21 Jun 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Soft Delete for Recovery Services Vault</title>
<description><![CDATA[<p><strong>Soft Delete</strong> provides an additional layer of security for the data in the recovery services vault. It is like a recycle bin for the data in the vault and prevents accidental deletion of protected items. If this option is enabled, then when you delete any protected item, its data is retained for 14 days.</p>
<h3>Configurations</h3>
<p>To configure Soft Delete for a recovery services vault, navigate to its Properties. Here, click on the &quot;Update&quot; link under the <strong>Security Settings</strong>. This will open a pop-up blade where you can enable or disable the soft delete option. </p>
<img src="/images/16170599216062605138ee3.png" alt="Security Settings - Soft Delete">
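<p>The same toggle is exposed through the Azure CLI. A hedged sketch with placeholder names:</p>

```shell
# Check the vault's current backup properties, including soft delete state.
az backup vault backup-properties show \
  --resource-group my-rg --name my-recovery-vault

# Explicitly enable (or disable) soft delete for the vault.
az backup vault backup-properties set \
  --resource-group my-rg --name my-recovery-vault \
  --soft-delete-feature-state Enable
```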
<p><strong>Note</strong>: </p>
<ul>
<li>This option is enabled by default for new Recovery Services vaults. This section gives you the option to disable the setting if you require it.</li>
<li>The additional 14 days of retention for backup data in the &quot;soft delete&quot; state don't incur any cost to you.</li>
</ul>
<p>Soft delete protection is available for these services:</p>
<ul>
<li>Soft delete for Azure virtual machines</li>
<li>Soft delete for SQL Database in Azure VM </li>
<li>Soft delete for SAP HANA in Azure VM workloads</li>
</ul>
<h3>Soft Delete Behavior</h3>
<p>Once you have soft delete enabled, go ahead and delete a VM from the Backup items under the protected items (use a non-critical, demo VM only). After you have deleted the VM, you will see that it is marked as Deleted, as shown below. The VM will continue to appear in the protected items list for the next 14 days. Click on the VM item for more options.</p>
<img src="/images/1617064412606271dc8ffa5.png" alt="Deleted VM from the Backup Items">
<p>In the next blade, you can see that there is an &quot;Undelete&quot; option to recover the data for the VM's backups. There is also a message showing when the protected item was deleted and for how long you can recover the data. After this period, the data is permanently deleted.</p>
<img src="/images/1617064419606271e3bd738.png" alt="Undelete option">
<p>You can learn more in the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/backup/backup-azure-security-feature-cloud" target="_blank">Soft delete for Azure Backup</a> </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-soft-delete-for-recovery-services-vault</link>
<pubDate>Sun, 14 Jun 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Cross Region Restore (CRR) for Azure Virtual Machines using Azure Backup</title>
<description><![CDATA[<p>In the last post, we looked at the redundancy options for a Recovery Services Vault. You can view that post here: <a href="https://HarvestingClouds.com/post/azure-backup-set-storage-redundancy-for-recovery-services-vault/" target="_blank">Set Storage Redundancy for Recovery Services Vault</a></p>
<p>In this post, we will look at how to take control of the replicated data.</p>
<p>By default, the data that is replicated to the secondary region is available to restore in the secondary region only if Azure declares a disaster in the primary region. But now you can also control the cross-region restore (CRR) yourself. You can trigger a cross-region restore regardless of whether the primary region is available or not. Azure Backup now supports restoring Azure Virtual Machines as well as disks from a secondary region.</p>
<p>Behind the scenes, Azure Backup leverages the storage accounts’ read-access geo-redundant storage (<strong>RA-GRS</strong>) capability to support restores from a secondary region. If you enable cross-region restore, Microsoft upgrades your backup storage from GRS to read-access geo-redundant storage (RA-GRS). Charges for storage are separate from the cost of Azure Backup.</p>
<p><strong>Note</strong> that due to delays in storage replication from primary to secondary, there will be <strong>latency</strong> in the backed-up data being available for a restore in the secondary region.</p>
<p>Cross-Region Restore (CRR) supports the following data sources:</p>
<ul>
<li>Azure VMs</li>
<li>SQL databases on Azure VMs </li>
<li>SAP HANA databases on Azure VMs</li>
</ul>
<h3>Configurations</h3>
<p>To update the Cross-Region Restore (CRR) settings: </p>
<ol>
<li>Navigate to the recovery services vault and click on the <strong>Properties</strong> under Settings</li>
<li>Then click on the &quot;<strong>Update</strong>&quot; link under the Backup Configuration</li>
<li>In the Backup Configurations blade, make sure that the setting for redundancy is set to &quot;Geo-Redundant&quot;. Then select &quot;<strong>Enable</strong>&quot; for the &quot;<strong>Cross Region Restore</strong>&quot; setting.</li>
</ol>
<p>Finally, click on the Save button to save the settings.</p>
<img src="/images/16170496926062385cf189b.png" alt="Cross-Region Restore">
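<p>The same configuration can be applied from the Azure CLI. A hedged sketch with placeholder names (remember the change is one-way):</p>

```shell
# Enable Cross Region Restore; this requires Geo-Redundant storage
# and can not be disabled afterwards. Names are placeholders.
az backup vault backup-properties set \
  --resource-group my-rg --name my-recovery-vault \
  --backup-storage-redundancy GeoRedundant \
  --cross-region-restore-flag True
```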
<p><strong>Note</strong>: Cross Region Restore is currently a non-reversible storage property. This means that you won't be able to disable it after enabling it. </p>
<p>Once you have the option enabled, navigate to Protected Items -&gt; Backup Items -&gt; Azure Virtual Machine and select a VM that is protected and has a valid backup. You will see a new option to &quot;Restore to Secondary Region&quot;. </p>
<img src="/images/16170528126062448cb338f.png" alt="Option to restore the VM to Secondary Region">
<p>To use this option, you not only need a valid backup, you also need that backup to be replicated to the secondary region. If your primary region is &quot;East US&quot;, then you need at least one copy in the &quot;West US&quot; region. Also, note that you can only restore to the secondary region that is paired with the region where you have deployed the resources.</p>
<p>You can find the secondary paired region for the region where you have deployed your resources by referring to the table in this link: <a href="https://docs.microsoft.com/en-us/azure/best-practices-availability-paired-regions#azure-regional-pairs" target="_blank">Business continuity and disaster recovery (BCDR): Azure Paired Regions</a> </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-cross-region-restore-crr-for-azure-virtual-machines-using-azure-backup</link>
<pubDate>Thu, 04 Jun 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Backup - Set Storage Redundancy for Recovery Services Vault</title>
<description><![CDATA[<p>When you create a Recovery Services Vault in Azure, the default for Storage Redundancy is now &quot;Geo-redundant&quot;. Earlier, this setting was not customizable and defaulted to Locally-redundant. As part of a Business Continuity and Disaster Recovery (<strong>BCDR</strong>) strategy, it makes sense to make backups geo-redundant. This ensures that the backup is replicated to a secondary paired region, so that if a disaster takes down the primary region, you still have access to the data in the secondary region. </p>
<p>You can find the secondary paired region for the region where you have deployed your resources by referring to the table in this link: <a href="https://docs.microsoft.com/en-us/azure/best-practices-availability-paired-regions#azure-regional-pairs" target="_blank">Business continuity and disaster recovery (BCDR): Azure Paired Regions</a></p>
<h3>Where to update configurations</h3>
<p>To update the redundancy settings: </p>
<ol>
<li>Navigate to the recovery services vault and click on the <strong>Properties</strong> under Settings</li>
<li>Then click on the &quot;Update&quot; link under the Backup Configuration</li>
<li>In the Backup Configurations blade, the setting for redundancy is the first one and is available under the &quot;Storage replication type&quot;</li>
</ol>
<img src="/images/161704808060623210d5ee6.png" alt="Storage Replication Type">
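<p>The replication type can also be set from the Azure CLI, which is handy when provisioning vaults in scripts. A sketch with placeholder names:</p>

```shell
# Set the storage replication type for a vault that has no protected
# items yet. Allowed values include GeoRedundant and LocallyRedundant.
az backup vault backup-properties set \
  --resource-group my-rg --name my-recovery-vault \
  --backup-storage-redundancy GeoRedundant
```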
<p><strong>Note</strong>: This setting can not be changed if any item is being protected in the vault. You can only modify it when nothing is protected. </p>
<p>By default, the data that is replicated to the secondary region is available to restore in the secondary region only if Azure declares a disaster in the primary region. But now you can also control the cross-region restore (CRR) yourself. We will look at that feature in the next blog post that you can access here: <a href="https://HarvestingClouds.com/post/azure-backup-cross-region-restore-crr-for-azure-virtual-machines-using-azure-backup/" target="_blank">Cross Region Restore (CRR) for Azure Virtual Machines using Azure Backup</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-backup-set-storage-redundancy-for-recovery-services-vault</link>
<pubDate>Tue, 02 Jun 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>How to get alerts based on your spending in Azure</title>
<description><![CDATA[<p>Azure has very advanced Cost Management capabilities. One of these is the ability to create alerts based on your spending. You can define budgets, and when your spending approaches a percentage of a budget, you can be alerted via email, SMS, app push notification, or even a voice call. You can also take automated actions if spending exceeds a certain percentage of the budget. </p>
<p>You take action when an alert is triggered by leveraging Action Groups in Azure. You can check how to create Action Groups in this post: <a href="https://HarvestingClouds.com/post/creating-the-action-groups-for-alerts-in-microsoft-azure/" target="_blank">Creating the Action Groups for alerts in Microsoft Azure</a>.</p>
<h3>How to configure the Alerts</h3>
<p>Navigate to &quot;Cost Management + Billing&quot; and select the &quot;<strong>Cost Management</strong>&quot; option.</p>
<img src="/images/1616979222606125161f60d.png" alt="Cost Management">
<p>In the Cost Management blade, select the option for &quot;<strong>Budgets</strong>&quot; and then click on &quot;<strong>+ Add</strong>&quot;.</p>
<img src="/images/161697925560612537d527b.png" alt="Budgets inside Cost Management">
<p>In the next screen under &quot;<strong>Create a budget</strong>&quot; select the appropriate values for the following:</p>
<ul>
<li>Budget scope</li>
<li>Budget name</li>
<li>Reset period - under most circumstances this should be Billing month</li>
<li>Creation date</li>
<li>Expiration date</li>
<li>Provide an amount for the Budget</li>
</ul>
<img src="/images/16169792766061254c6544e.png" alt="Create Budget">
<p>In the next screen, &quot;Set alerts&quot;, provide the percentage of the budget under the Alert conditions. For example, if you set the budget to $150 and the percentage of budget to 80%, the alert triggers at $120.</p>
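<p>The arithmetic behind the alert condition is simple. A small illustrative helper (not part of Azure, just to make the math concrete):</p>

```python
def alert_threshold(budget: float, percentage: float) -> float:
    """Return the spend at which a budget alert fires.

    `percentage` is the "% of budget" value from the Set alerts screen.
    """
    return budget * percentage / 100

# A $150 budget with an 80% alert condition triggers at $120.
print(alert_threshold(150, 80))  # -> 120.0
```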
<p>Provide the Action Group to alert via SMS or email. You can also alert via email by providing individual email addresses or a distribution list's address in the &quot;Alert recipients (email)&quot; entry.</p>
<img src="/images/16169834056061356dbd7ce.png" alt="Set alerts section">
<p>That's it! The alert is now up and running. If you want to test it, update the budget and the alert percentage to be close to your current spending, and the alert should be triggered in a short while. </p>
<p>This is a very helpful feature that lets you take control of your spending and stay on top of your overall usage. </p>]]></description>
<link>https://HarvestingClouds.com/post/how-to-get-alerts-based-on-your-spending-in-azure</link>
<pubDate>Sat, 30 May 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Creating the Action Groups for alerts in Microsoft Azure</title>
<description><![CDATA[<p>In the previous post, we discussed what Action Groups are and what they are used for. You can read it here: <a href="https://HarvestingClouds.com/post/understanding-the-action-groups-for-alerts-in-microsoft-azure/" target="_blank">Understanding the Action Groups for alerts in Microsoft Azure</a>. In this post, we take a look in the Azure portal to understand how to create Action Groups. </p>
<h3>Action Groups Location</h3>
<p>Action groups are located under Azure Monitor, in the Alerts section. Here, click the &quot;<strong>Manage actions</strong>&quot; button to create and modify action groups.</p>
<img src="/images/161698167360612ea9b1846.png" alt="Manage actions">
<h3>Creating Action Groups</h3>
<p>In the Manage actions blade, click on the &quot;<strong>+ Add action group</strong>&quot; button. </p>
<img src="/images/161698169360612ebd3cde3.png" alt="Add action group">
<p>Enter the basic information in the first blade of the Create an action group wizard.</p>
<img src="/images/161698170760612ecb01b15.png" alt="Basics information">
<p>In the Notifications tab, select the type of notification and add the required details for Email, SMS, Azure app push notification, Voice, etc.</p>
<img src="/images/161698172360612edbd40bc.png" alt="Notification settings">
<p>Under the Actions tab, select whether you want to perform any action. The possible action types are:</p>
<ol>
<li>Automation Runbook</li>
<li>Azure Function</li>
<li>ITSM</li>
<li>Logic App</li>
<li>Secure Webhook</li>
<li>Webhook</li>
</ol>
<img src="/images/161698175260612ef807104.png" alt="Actions inside Action Group">
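<p>A Webhook action simply POSTs a JSON alert payload to your endpoint. As a rough sketch, the handler below pulls a summary out of a payload shaped like Azure Monitor's common alert schema; the sample payload and field names here are illustrative, so verify them against the schema documentation before relying on them:</p>

```python
import json

# Illustrative payload only -- modeled on, but not guaranteed to match,
# the Azure Monitor common alert schema.
sample = """{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {"essentials": {"alertRule": "budget-80-percent", "monitorCondition": "Fired"}}
}"""

def summarize(payload: str) -> str:
    """Extract a one-line summary from an alert payload."""
    essentials = json.loads(payload)["data"]["essentials"]
    return f'{essentials["alertRule"]}: {essentials["monitorCondition"]}'

print(summarize(sample))  # budget-80-percent: Fired
```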
<p>Next, add the Tags and review the final information. Click on the Create button once done.</p>
<p>Now that the Action Group is ready, you can use it in any alerts you create. The notifications and actions inside the Action Group are triggered whenever an alert fires.</p>
<p>For further details, you can refer to the official documentation here: <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/action-groups" target="_blank">Create and manage action groups in the Azure portal</a></p>]]></description>
<link>https://HarvestingClouds.com/post/creating-the-action-groups-for-alerts-in-microsoft-azure</link>
<pubDate>Tue, 26 May 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Understanding the Action Groups for alerts in Microsoft Azure</title>
<description><![CDATA[<p>Action Groups are a very useful concept in Microsoft Azure. An Action Group defines two things for when an alert is triggered: first, how notifications are sent and to whom; second, whether any action should be taken.</p>
<p>Using Action Groups you can <strong>send notifications</strong> via the following methods when an action group triggers:</p>
<ol>
<li>Email</li>
<li>SMS</li>
<li>Azure app notifications</li>
<li>Voice calls</li>
<li>Email to an Azure Resource Manager role</li>
</ol>
<p>Optionally, you can also <strong>take actions</strong> via the following Azure features when an action group is triggered:</p>
<ol>
<li>Automation Runbook</li>
<li>Azure Function</li>
<li>ITSM</li>
<li>Logic App</li>
<li>Secure Webhook</li>
<li>Webhook</li>
</ol>
<p>Action Groups are located in the Azure Monitor under the &quot;Alerts&quot; section. Navigate to &quot;Manage actions&quot; to create or modify action groups.</p>
<p>In the next blog post, we will look at how to create Action Groups in the Azure Portal. You can check the next blog here: <a href="https://HarvestingClouds.com/post/creating-the-action-groups-for-alerts-in-microsoft-azure/" target="_blank">Creating the Action Groups for alerts in Microsoft Azure</a></p>]]></description>
<link>https://HarvestingClouds.com/post/understanding-the-action-groups-for-alerts-in-microsoft-azure</link>
<pubDate>Mon, 25 May 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Index</title>
<description><![CDATA[<p>This is the Index for the series comparing <strong>Microsoft Azure</strong> and <strong>Amazon's AWS</strong> services. This series takes a very <strong>practical</strong> and <strong>hands-on</strong> approach (rather than focusing only on theory). You don't need to follow all the blog posts; you can check only the ones where you want to draw a comparison or see how to work with a specific service.</p>
<p>If you are well versed in one platform, this series will cross-train you on the competing platform. The intention is to learn the pros and cons of each platform, beginning with the basic services. You need to know the competition well before you can advocate for any platform.</p>
<p><strong>Disclaimer</strong>: I am a Microsoft Azure MVP and a bit biased towards Azure. I make every attempt to view both platforms impartially and list pros and cons as they are without any personal opinions. </p>
<p><strong>Note</strong> that this Index is updated regularly as more posts are added around this topic.</p>
<ol>
<li>
<p><strong>Virtual Machines vs EC2 instances</strong></p>
<p>a. Combined</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-01-finding-resources" target="_blank">Finding Resources</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-04-differences-in-iops" target="_blank">Differences in IOPS</a></li>
</ol>
<p>b. Azure</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-03-creating-azure-vms" target="_blank">Creating Azure Virtual Machines</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-06-monitoring-of-azure-vms" target="_blank">Monitoring of Azure VMs</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-07-boot-diagnostics-of-azure-vms" target="_blank">Boot Diagnostics of Azure VMs</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-08-azure-vm-insights" target="_blank">Azure VM Insights</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-10-backing-up-azure-vms" target="_blank">Backing up Azure VMs</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-11-protecting-azure-vms" target="_blank">Protecting Azure VMs</a></li>
</ol>
<p>c. AWS</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-02-creating-ec2-instances" target="_blank">Creating EC2 Instances</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-05-monitoring-of-ec2-instances" target="_blank">Monitoring of EC2 instances</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-09-backing-up-ec2-instances-part-1" target="_blank">Backing up EC2 instances Part 1</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-09-backing-up-ec2-instances-part-2" target="_blank">Backing up EC2 instances Part 2</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-09-backing-up-ec2-instances-part-3" target="_blank">Backing up EC2 instances Part 3</a></li>
</ol>
</li>
<li>
<p><strong>Batch Services</strong></p>
<p>a. Combined</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-vs-aws" target="_blank">Batch Services Comparison</a></li>
</ol>
<p>b. Azure</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-1-creating-batch-account" target="_blank">Creating Batch account</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-2-creating-batch-pool-in-an-account" target="_blank">Creating Batch Pool in an Account</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-3-creating-job-in-the-batch-pool" target="_blank">Creating Job in the Batch Pool</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-4-creating-tasks-in-a-batch-job" target="_blank">Creating Tasks in a Batch Job</a></li>
</ol>
<p>c. AWS</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-aws-working-with-aws-batch-service" target="_blank">Working with AWS Batch Service</a></li>
</ol>
</li>
<li>
<p><strong>Auto Scaling</strong></p>
<p>a. Combined</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-vmss-vs-aws-auto-scaling/" target="_blank">Azure VMSS vs AWS Auto Scaling</a></li>
</ol>
<p>b. Azure</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-01-creating-vmss-part-1" target="_blank">Creating Virtual Machine Scale Sets (VMSS) - Part 1</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-01-creating-vmss-part-2" target="_blank">Creating Virtual Machine Scale Sets (VMSS) - Part 2</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-02-view-running-instances-in-vmss" target="_blank">View Running Instances in VMSS</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-03-updating-the-scaling-policies-of-vmss" target="_blank">Updating the Scaling policies of VMSS</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-04-viewing-the-run-history-of-vmss" target="_blank">Viewing the Run History of VMSS</a></li>
</ol>
<p>c. AWS</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-01-all-related-resources-for-auto-scaling" target="_blank">All Related Resources for Auto Scaling</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-02-creating-launch-configurations" target="_blank">Creating Launch Configurations</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-03-creating-launch-templates/" target="_blank">Creating Launch Templates</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-04-creating-auto-scaling-groups-part-1" target="_blank">Creating Auto Scaling Groups - Part 1</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-04-creating-auto-scaling-groups-part-2" target="_blank">Creating Auto Scaling Groups - Part 2</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-05-aws-auto-scaling-service" target="_blank">AWS Auto Scaling service</a></li>
</ol>
</li>
<li>
<p><strong>Disks</strong></p>
<p>a. Combined</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-azure-managed-disks-vs-aws-elastic-block-store-ebs" target="_blank">Azure Managed Disks vs AWS Elastic Block Store (EBS)</a></li>
</ol>
</li>
<li>
<p><strong>Storage</strong></p>
<p>a. Combined</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-microsoft-azure-storage-accounts-vs-aws-simple-storage-service-s3" target="_blank">Microsoft Azure Storage Accounts vs AWS Simple Storage Service (S3)</a></li>
</ol>
<p>b. Azure</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-01-creating-azure-storage-account" target="_blank">Creating Azure Storage Account</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-02-data-transfer-options" target="_blank">Data Transfer Options</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-03-event-driven-automation-of-azure-storage-accounts" target="_blank">Event driven automation of Azure Storage accounts</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-04-securely-accessing-storage-accounts" target="_blank">Securely Accessing Storage Accounts</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-05-configuring-firewall-and-vnet-access-on-storage-accounts" target="_blank">Configuring Firewall and vNet access on Storage accounts</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-06-geo-replication-on-storage-accounts" target="_blank">Geo Replication on Storage accounts</a></li>
</ol>
<p>c. AWS</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-01-creating-simple-storage-service-or-s3-buckets" target="_blank">Creating Simple Storage Service or S3 buckets</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-02-working-with-s3-bucket-and-uploading-files" target="_blank">Working with S3 bucket and uploading files</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-03-configuring-s3-bucket-properties" target="_blank">Configuring S3 Bucket Properties</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-04-setting-permissions-on-s3-bucket" target="_blank">Setting Permissions on S3 Bucket</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-05-adding-lifecycle-rules-for-s3-buckets" target="_blank">Adding Lifecycle Rules for S3 buckets</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-06-configuring-replication-on-s3-buckets" target="_blank">Configuring replication on S3 buckets</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-07-setting-up-access-points-on-s3-bucket" target="_blank">Setting up Access Points on S3 bucket</a></li>
</ol>
</li>
<li>
<p><strong>Networking</strong></p>
<p>a. Combined</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-virtual-networks-vs-aws-virtual-private-cloud-vpc" target="_blank">Azure Virtual Networks vs AWS Virtual Private Cloud (VPC)</a></li>
</ol>
<p>b. Azure</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-01-creating-virtual-network" target="_blank">Creating Virtual Network</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-02-working-with-virtual-networks-vnets-and-subnets" target="_blank">Working with Virtual Networks (vNets) and Subnets</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-03-service-endpoints" target="_blank">Service Endpoints</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-04-creating-network-security-groups" target="_blank">Creating Network Security Groups (NSGs)</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-05-creating-rules-in-nsgs-and-assigning-nsgs" target="_blank">Creating rules in NSGs and assigning NSGs</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-06-using-service-tags-to-define-rules-in-nsgs/" target="_blank">Using Service Tags to define rules in NSGs</a></li>
</ol>
<p>c. AWS</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-01-creating-virtual-private-cloud-vpc" target="_blank">Creating Virtual Private Cloud (VPC)</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-02-working-with-vpcs-and-subnets" target="_blank">Working with VPCs and Subnets</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-03-route-tables" target="_blank">Route Tables</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-04-network-acls" target="_blank">Network ACLs</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-05-security-groups" target="_blank">Security Groups</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-06-network-acls-vs-security-groups" target="_blank">Network ACLs vs Security Groups</a></li>
</ol>
</li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-index</link>
<pubDate>Thu, 14 May 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure - 06 Using Service Tags to define rules in NSGs</title>
<description><![CDATA[<p>In Network Security Groups (NSGs), suppose you want to define rules related to Azure services, e.g. allowing communication on a particular port to an Azure Storage account. One way is to create a rule for each and every IP address published by Microsoft for the Azure Storage service. This is very cumbersome and not practical. There is also a limit to the number of rules you can write and the number of NSGs you can have in a subscription: up to 1000 rules per NSG, and a maximum of 200 NSGs in a subscription. You can find these limits here: <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits" target="_blank">Azure subscription and service limits, quotas, and constraints.</a></p>
<h2>Solution</h2>
<p>Microsoft provides the capability to write NSG rules using a Service Tag as the source or destination. When you use a service tag, the system ensures that all the IP addresses for that service are accounted for. If any IP address changes, or a new one is added, the service tag accounts for that automatically. You can target a whole service across all regions, or only in a particular region: e.g. select &quot;Storage&quot; for the Storage service everywhere, or &quot;Storage.EastUS&quot; to target it only in the East US region.</p>
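<p>Conceptually, a service tag is just a maintained set of IP prefixes, and matching traffic against it is a prefix lookup. The Python sketch below uses made-up prefixes purely to illustrate the idea (the real prefixes come from Microsoft's downloadable &quot;Azure IP Ranges and Service Tags&quot; list):</p>

```python
import ipaddress

# Hypothetical prefix list for illustration only; real prefixes come from
# Microsoft's published service tag data.
service_tags = {
    "Storage.EastUS": ["20.60.0.0/16", "52.239.152.0/22"],
}

def matches_tag(ip: str, tag: str) -> bool:
    """True if the IP falls inside any prefix of the given service tag."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p) for p in service_tags[tag])

print(matches_tag("20.60.1.5", "Storage.EastUS"))  # True
print(matches_tag("8.8.8.8", "Storage.EastUS"))    # False
```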
<h2>Where is the Option</h2>
<p>In the <strong>Inbound security rules</strong>, this option is available for the Source; in the <strong>Outbound security rules</strong>, for the Destination. Once you select Service Tag as the option, you are presented with a drop-down for the service tag selection, with Internet as the default. You can type to search for a particular service tag, e.g. Azure Storage, Azure Backup, etc.</p>
<img src="/images/16169643636060eb0bc8678.png" alt="Service Tag option">
<p>You can read more about Service Tags here: <a href="https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview" target="_blank">Virtual network service tags</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-06-using-service-tags-to-define-rules-in-nsgs</link>
<pubDate>Tue, 12 May 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>eBook - 6 Step Migration Strategy – Systematic Approach to Migrating your Workloads to Microsoft Azure</title>
<description><![CDATA[<p>I have authored this eBook titled &quot;<strong><em>6 Step Migration Strategy – Systematic Approach to Migrating your Workloads to Microsoft Azure</em></strong>&quot; which I am making available to everyone for <strong>free</strong>. It is a quick and short read that is meant to help set up the migration process for your department. Use this eBook as a blueprint to kickstart your migration to the cloud.</p>
<p>You can download the eBook by clicking on this link: <a href="https://HarvestingClouds.com/eBooks/6%20Step%20Migration%20Strategy%20%E2%80%93%20Systematic%20Approach%20to%20Migrating%20your%20Workloads%20to%20Microsoft%20Azure.pdf" target="_blank">6 Step Migration Strategy – Systematic Approach to Migrating your Workloads to Microsoft Azure</a></p>
<p><a href="https://HarvestingClouds.com/eBooks/6%20Step%20Migration%20Strategy%20%E2%80%93%20Systematic%20Approach%20to%20Migrating%20your%20Workloads%20to%20Microsoft%20Azure.pdf" target="_blank"><img src="https://HarvestingClouds.com/eBooks/Ebook2CoverPage.png" alt="6 Steps Migration Strategy Book" height="350" width="175"></a></p>
<link>https://HarvestingClouds.com/post/ebook-6-step-migration-strategy-systematic-approach-to-migrating-your-workloads-to-microsoft-azure</link>
<pubDate>Fri, 27 Mar 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Data Factory Basics and Azure SQL Basics Series - Index</title>
<description><![CDATA[<p>This is the index for the Azure Data Factory and Azure SQL Basics series. I received various requests to write posts about these Azure resources. This series is a practical and hands-on look at various features, from creating the service to working with it in detail.</p>
<p>Below is the index for this series which is updated periodically.</p>
<h3>Azure Data Factory related posts</h3>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-data-factory-basics-creating-a-data-factory" target="_blank">Creating a Data Factory</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-data-factory-basics-exploring-the-azure-data-factory-layout" target="_blank">Exploring the Azure Data Factory layout</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-data-factory-basics-creation-of-azure-ssis-integration-runtime-ir-into-azure-data-factory" target="_blank">Creation of Azure SSIS Integration Runtime (IR) into Azure Data Factory</a></li>
</ol>
<h3>Azure SQL related posts</h3>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-finding-azure-sql-databases-in-the-azure-portal" target="_blank">Finding Azure SQL Databases in the Azure Portal</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-creating-azure-sql-databases" target="_blank">Creating Azure SQL Databases</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-connecting-to-azure-sql-db-using-ssms" target="_blank">Connecting to Azure SQL DB using SSMS</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-enabling-geo-replication-for-disaster-recovery" target="_blank">Enabling Geo-Replication for Disaster Recovery</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-data-masking-of-sensitive-information" target="_blank">Data Masking of sensitive information</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-exploring-the-new-query-editor-in-the-azure-portal" target="_blank">Exploring the new Query Editor in the Azure Portal</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-advanced-data-security" target="_blank">Advanced Data Security</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-query-performance-insights" target="_blank">Query Performance Insights</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-automatic-tuning-and-performance-recommendation" target="_blank">Automatic Tuning and Performance recommendation</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-creating-elastic-pool-in-azure-sql-server" target="_blank">Creating Elastic Pool in Azure SQL Server</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-azure-active-directory-admin-manage-azure-sql-via-ad-users" target="_blank">Azure Active Directory Admin - Manage Azure SQL via AD Users</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-managing-backups-for-azure-sql-database" target="_blank">Managing Backups for Azure SQL Database</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-restoring-azure-sql-database-from-the-backup" target="_blank">Restoring Azure SQL Database from the backup</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-sql-basics-setting-up-private-endpoints-on-an-azure-sql-server" target="_blank">Setting up Private Endpoints on an Azure SQL Server</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/data-factory-basics-and-azure-sql-basics-series-index</link>
<pubDate>Sat, 22 Feb 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Setting up Private Endpoints on an Azure SQL Server</title>
<description><![CDATA[<p><strong>Note</strong> that this post is part of a series. You can view all posts in the series here:<b> <a href="https://HarvestingClouds.com/post/data-factory-basics-and-azure-sql-basics-series-index" target="_blank">Data Factory Basics and Azure SQL Basics Series - Index</a></b></p>
<p>A <strong>Private Endpoint</strong> is the fundamental building block of Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to communicate privately with linked resources. Under the hood, it creates a network interface card (NIC) in your virtual network and attaches it to the Azure SQL server, so the server gets a private IP address on that network. Any other resource on the network can then securely access the Azure SQL server without the communication leaving the network at all. </p>
<p>Microsoft also goes one step further by creating a <strong>Private DNS integration</strong>. It leverages a private DNS zone to provide a DNS name for your Azure SQL Database server that is mapped to the private IP address (instead of the usual public one). </p>
<p>In this post, you will set up a private endpoint for the SQL server through the Azure portal. Then you can securely access the Azure SQL Database server from a VM.</p>
<h3>Create a private endpoint</h3>
<p>In this section, you will create a private endpoint for the Azure SQL server.</p>
<p>On the upper-left side of the screen in the Azure portal, select <em>Create a resource &gt; Private Link Center (Preview)</em>.</p>
<img src="/images/15846633065e740b0ad1dc6.png" alt="Private Link Center">
<p>In Private Link Center - Overview, select the Private endpoints option and click Add. </p>
<img src="/images/15846633135e740b11c2d24.png" alt="Adding new Private Endpoint">
<p>Enter or select this information:</p>
<ul>
<li><strong>Subscription</strong> - Select your subscription.</li>
<li><strong>Resource group</strong> - Select myResourceGroup. You created this in the previous section.</li>
</ul>
<p>Under instance details provide:</p>
<ul>
<li><strong>Name</strong> - Enter myPrivateEndpoint. If this name is taken, create a unique name.</li>
<li><strong>Region</strong> - Select the geo region where you want to deploy the underlying resources</li>
</ul>
<img src="/images/15846633635e740b43efd9a.png" alt="Creating a private endpoint wizard">
<p>In the next screen for &quot;Create a private endpoint - Resource&quot;, enter or select this information:</p>
<ul>
<li><strong>Connection method</strong> - Select the radio button for &quot;<em>Connect to an Azure resource in my directory</em>&quot;</li>
<li><strong>Subscription</strong> - Select your subscription.</li>
<li><strong>Resource type</strong> - Select Microsoft.Sql/servers.</li>
<li><strong>Resource</strong> - Select your server</li>
<li><strong>Target sub-resource</strong> - Select sqlServer</li>
</ul>
<p>Click next once done. </p>
<img src="/images/15846633725e740b4c2d6b4.png" alt="Adding resources">
<p>In <strong>Create a private endpoint (Preview) - Configuration</strong>, enter or select this information:</p>
<p>Under the networking section, provide:</p>
<ul>
<li><strong>Virtual network</strong> - Select MyVirtualNetwork.</li>
<li><strong>Subnet</strong> - Select mySubnet.</li>
</ul>
<p>Under the Private DNS integration, provide:</p>
<ul>
<li><strong>Integrate with private DNS zone</strong> - Select Yes.</li>
<li><strong>Private DNS Zone</strong> - Select (New)privatelink.database.windows.net</li>
</ul>
<img src="/images/15846633795e740b53043e2.png" alt="Networking and DNS integration configurations">
<p>Select Review + create. You're taken to the Review + create page where Azure validates your configuration. When you see the &quot;Validation passed&quot; message, select Create.</p>
<img src="/images/15846634125e740b746581c.png" alt="Review and Create">
<p>Once the private endpoint is created, your Azure SQL server is ready to be connected to via its new private IP address. This address is in the virtual network you connected it to. There is also a network interface card (NIC) resource that actually links the Azure SQL server to the virtual network.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-setting-up-private-endpoints-on-an-azure-sql-server</link>
<pubDate>Thu, 20 Feb 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Troubleshooting Azure Networking - Using Network Watcher</title>
<description><![CDATA[<p><strong>Network Watcher</strong> is like a Swiss Army knife for all things networking: a one-stop shop for monitoring and troubleshooting your networks and related components. In this post, we focus on the troubleshooting aspects of Network Watcher.</p>
<h3>Enabling Network Watcher</h3>
<p>First of all, you need to enable Network Watcher. You should do so for every region where you have a virtual network deployed, or will deploy one in the near future. To do so, navigate to the Network Watcher service (by searching for it). In the Overview screen, expand the regions area as shown below, right-click the region for which you want to enable the service, and select &quot;<em>Enable network watcher</em>&quot;. </p>
<p>The first time you do this, it will create a new resource group called &quot;NetworkWatcherRG&quot;. If you check this resource group after enabling network watcher for any region, it will look empty, but it actually contains hidden resources of type &quot;<em>microsoft.network/networkwatchers</em>&quot;.</p>
<img src="/images/15851936465e7c22ae8edbe.png" alt="Enabling Network Watcher">
<p>Once Network Watcher is enabled, you are ready to use the various mini-tools (sections) that provide the troubleshooting features.</p>
<h3>IP Flow Verify</h3>
<p>This tool checks whether traffic is able to flow or is being blocked. Select the VM and the network interface for which you want to check the IP flow; the local IP address is auto-populated. Fill in the remote IP address and remote port and click the Check button. Network Watcher simulates the traffic and reports the result. </p>
<img src="/images/15851936555e7c22b7e441a.png" alt="IP Flow Verify">
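<p>Under the hood, IP Flow Verify is essentially NSG rule evaluation: rules are checked in priority order (lowest number first), the first match decides the outcome, and there is an implicit deny if nothing matches. A simplified Python model of that evaluation (single ports and destination-only matching, unlike real NSG rules):</p>

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class Rule:
    priority: int
    access: str       # "Allow" or "Deny"
    dest_prefix: str  # CIDR prefix
    dest_port: int    # simplified: a single port

def ip_flow_verify(rules: list[Rule], dest_ip: str, dest_port: int) -> str:
    """Lowest-priority matching rule wins, mimicking NSG evaluation."""
    addr = ipaddress.ip_address(dest_ip)
    for r in sorted(rules, key=lambda r: r.priority):
        if addr in ipaddress.ip_network(r.dest_prefix) and r.dest_port == dest_port:
            return r.access
    return "Deny"  # implicit deny if no rule matches

rules = [Rule(100, "Allow", "10.0.0.0/16", 443), Rule(200, "Deny", "0.0.0.0/0", 443)]
print(ip_flow_verify(rules, "10.0.1.4", 443))  # Allow
print(ip_flow_verify(rules, "40.1.2.3", 443))  # Deny
```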
<h3>Next Hop</h3>
<p>If you have various route tables and want to verify whether traffic follows a particular route, &quot;Next hop&quot; is the section to use. For example, you can check whether traffic from a source to a destination goes via a network virtual appliance, based on a route defined in a route table. </p>
<p>Simply select the source and destination. The source IP address is populated based on the network interface selected, and you enter the destination IP address. Click the &quot;Next hop&quot; button once you are ready. The output shows what the next hop will be, along with the route and the route table directing the traffic there.</p>
<img src="/images/15851936625e7c22be7dc81.png" alt="Next Hop">
<h3>Effective Security Rules</h3>
<p>You may have multiple Network Security Groups (NSGs) applied to a VM, i.e. one NSG applied directly on its network interface and another applied at the subnet level. If you want to check the resulting set of rules and which NSGs are effectively applied to your VM, this is the section to use. Simply select your Virtual Machine (VM) and the effective security rules will be displayed in the bottom section. </p>
<img src="/images/15851936705e7c22c6906dc.png" alt="Effective Security Rules">
<p>For more information please check this link: <a href="https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-monitoring-overview" target="_blank">Azure Network Watcher</a></p>]]></description>
<link>https://HarvestingClouds.com/post/troubleshooting-azure-networking-using-network-watcher</link>
<pubDate>Tue, 11 Feb 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Troubleshooting Azure Networking - Checking Allowed and Denied Traffic in Network Security Groups (NSGs) via Log Analytics Queries</title>
<description><![CDATA[<p>In the last post, we set up the NSG Flow Logs to be sent to the Log Analytics workspace. In this post, we will run Log queries on this workspace to check the traffic data. We can easily see allowed vs denied traffic on the NSGs leveraging these queries.</p>
<p>To start first navigate to the Log Analytics workspaces. Click on the workspace which is the target for NSG Flow Logs in your Network Security Groups (NSGs). Within this workspace, click on the <strong>Logs</strong> section. If you are opening this for the first time, you will see a &quot;Getting Started&quot; button. Click on that and go through the tutorial if you want. </p>
<p>Once the Logs section is opened up (as shown below), you can type the queries we will discuss next in the main area (marked number 3 below). The results will appear in the bottom section. You can use the time filters (also shown below as number 2) to filter the data to a specific date and time range.</p>
<img src="/images/15851741605e7bd69073d9a.png" alt="Log Analytics - Checking Logs">
<p>All the logs from NSG Flow Logs are sent to the &quot;<strong>AzureNetworkAnalytics_CL</strong>&quot; table in Log Analytics. You can start querying this table for the data. Below are various queries that have helped me a lot in troubleshooting NSGs. </p>
<p><strong>NOTE</strong>: In the examples below, &quot;10.20.&quot; uniquely identifies my virtual network. You will need to tweak this value as per the IP address space of your virtual network. You can even specify multiple address spaces for both sources and destinations. </p>
<h3>Checking all Denied Traffic</h3>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| where NSGRuleAction == "D"
| summarize count() by VM_s, VMIP_s, SrcIP_s, DestIP_s, DestPort_d</code></pre>
<p>Extending the above query, we can also check the denied traffic along with the time generated, NSG name and rule name, subnet names and more:</p>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| extend NSGName=tostring(split(NSGList_s,'/',2)[0])
| where NSGRuleAction == "D"
| summarize count() by SourceIP=SrcIP_s, DestinationIP=DestIP_s, DestinationPort=DestPort_d, TimeGenerated, NSGName, NSGRuleName, SourceSubnet=Subnet1_s, DestinationSubnet=Subnet2_s</code></pre>
<h3>Allowed traffic between two vNets for a specific rule</h3>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| extend NSGName=tostring(split(NSGList_s,'/',2)[0])
| where NSGRuleAction == "A"
| where NSGRuleName == "privateipinbound-allow" 
| where SrcIP_s contains "10.20."
| where DestIP_s contains "10.24."
| summarize count() by SourceIP=SrcIP_s, DestinationIP=DestIP_s, DestinationPort=DestPort_d, TimeGenerated, NSGName, NSGRuleName, SourceSubnet=Subnet1_s, DestinationSubnet=Subnet2_s, L4Protocol_s</code></pre>
<h3>Reviewing a specific rule for a specific IP, allowed traffic only</h3>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| where NSGRuleAction == "A"
| where NSGRuleName == "temp-allowall-inbound" 
| where SrcIP_s contains "10.20." </code></pre>
<h3>Intra vNet allowed traffic for a specific rule</h3>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| extend NSGName=tostring(split(NSGList_s,'/',2)[0])
| where NSGRuleAction == "A"
| where NSGRuleName == "privateipinbound-allow" 
| where SrcIP_s contains "10.20."
| where DestIP_s contains "10.20."
| summarize count() by SourceIP=SrcIP_s, DestinationIP=DestIP_s, DestinationPort=DestPort_d, TimeGenerated, NSGName, NSGRuleName, SourceSubnet=Subnet1_s, DestinationSubnet=Subnet2_s, L4Protocol_s</code></pre>
<h3>Intra vNet allowed traffic for a specific NSG</h3>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| extend NSGName=tostring(split(NSGList_s,'/',2)[0])
| where NSGRuleAction == "A"
| where NSGName contains "systemcenter"
| where SrcIP_s contains "10.20."
| where DestIP_s contains "10.20."
| summarize count() by DestinationPort=DestPort_d, NSGName, NSGRuleName, SourceSubnet=Subnet1_s, DestinationSubnet=Subnet2_s, L4Protocol_s, TimeGenerated</code></pre>
<h3>Intra vNet allowed traffic for a specific NSG for Inbound Traffic Only</h3>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=split(NSGRules_s,'|',3)[0]
| extend NSGRuleName=tostring(split(NSGRules_s,'|',1)[0])
| extend NSGName=tostring(split(NSGList_s,'/',2)[0])
| where NSGRuleAction == "A"
| where NSGName contains "systemcenter"
| where SrcIP_s contains "10.20."
| where DestIP_s contains "10.20."
| where FlowDirection_s == "I"
| summarize count() by DestinationPort=DestPort_d, NSGName, NSGRuleName, SourceSubnet=Subnet1_s, DestinationSubnet=Subnet2_s, L4Protocol_s, TimeGenerated</code></pre>
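<p>As a further illustration of the same pattern, the query below compares allowed versus denied flows over time. This is only a sketch built on the same &quot;<strong>AzureNetworkAnalytics_CL</strong>&quot; fields used above; the one-hour bin size is an arbitrary choice you can adjust:</p>
<pre><code>AzureNetworkAnalytics_CL
| extend NSGRuleAction=tostring(split(NSGRules_s,'|',3)[0])
| summarize FlowCount=count() by NSGRuleAction, bin(TimeGenerated, 1h)
| order by TimeGenerated asc</code></pre>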
<p>Please note that the above queries are only meant to kick-start your troubleshooting; use them as a reference. Note how the properties are expanded and details extracted (using the split and extend functions). If you want me to explain any of these queries in detail, please say so in the comments below and I will provide details.</p>]]></description>
<link>https://HarvestingClouds.com/post/troubleshooting-azure-networking-checking-allowed-and-denied-traffic-in-network-security-groups-nsgs-via-log-analytics-queries</link>
<pubDate>Tue, 04 Feb 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Troubleshooting Azure Networking - Setting up Flow Logs Monitoring on Network Security Groups (NSGs)</title>
<description><![CDATA[<p>To be able to troubleshoot traffic being allowed or blocked by <strong>Network Security Groups</strong> (NSGs), Flow Logs should be enabled and sent to a Storage Account, Log Analytics workspace, etc. Setting this up is very easy, but it needs to be done on each NSG in your environment. </p>
<p><strong>Note</strong> that <strong>Network Watcher</strong> is a prerequisite for this. It will be auto-enabled for the region of the NSG when Flow Logs are set up.</p>
<p>To start, navigate to the Network Security Groups in the Azure Portal. Select the NSG for which you want to enable the Flow Logs. Scroll all the way down in the settings and select the &quot;<strong>NSG Flow Logs</strong>&quot; setting. Click on the Flow Logs in the middle area to open up the settings for it.</p>
<img src="/images/15851646075e7bb13f901c3.png" alt="NSG Flow Logs option">
<p>The Flow logs status should be turned On to log all the flows. <strong>Version 2</strong> includes additional information such as throughput. Click on the Storage option to configure the storage account used to export the flow logs.</p>
<img src="/images/15851646275e7bb1539e826.png" alt="Enabling logging to storage">
<p>Scroll down further to check the Log Analytics workspace settings. Turn on the <strong>Traffic Analytics status</strong>. For better and more detailed logging, set the &quot;Traffic Analytics processing interval&quot; to &quot;Every 10 mins&quot; instead of every 1 hour. Finally, select the Log Analytics workspace where you want the logs to be sent. For consistency, select the same workspace for all NSG flow logs. </p>
<img src="/images/15851646355e7bb15bb6704.png" alt="Enabling logging to Log Analytics workspace">
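<p>Once the flow logs have been running for a little while, a quick way to confirm that data is actually arriving in the workspace is to run a short query against it. This is a minimal sketch; &quot;<strong>AzureNetworkAnalytics_CL</strong>&quot; is the table where the NSG flow data lands, and the one-hour window is an arbitrary choice:</p>
<pre><code>AzureNetworkAnalytics_CL
| where TimeGenerated &gt; ago(1h)
| summarize FlowCount=count() by NSGList_s</code></pre>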
<p>Flow logs will now be captured automatically, and you are ready to start troubleshooting the traffic. We will look at how to do this, along with common queries you can reuse, in the next few blog posts.</p>]]></description>
<link>https://HarvestingClouds.com/post/troubleshooting-azure-networking-setting-up-flow-logs-monitoring-on-network-security-groups-nsgs</link>
<pubDate>Mon, 03 Feb 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Azure - 06 Geo Replication on Storage accounts</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Based on the replication strategy selected on your storage accounts, you can leverage the geo-replication option to restore your storage accounts to a secondary location. Azure Storage replication copies your data so that it is protected from transient hardware failures, network or power outages, and natural disasters. If an outage renders the primary endpoint unavailable, then you can initiate a failover to the secondary endpoint to rapidly restore write access to your data.</p>
<p>You can access this option by navigating to your storage account and clicking on the &quot;<strong>Geo-replication</strong>&quot; option. </p>
<img src="/images/15848219915e7676e74b844.png" alt="Geo replication">
<p>This option shows the replication you selected for your storage account while creating it. By clicking the Storage endpoints option, you can see what the endpoints will look like if you fail over to the secondary region. You can copy and keep these handy to update your application with these endpoints. </p>
<p>Scroll down to view the primary and secondary regions and the button to prepare for the failover. </p>
<img src="/images/15848219985e7676ee8f43f.png" alt="preparing for failover">
<p>When you click on this button, a new popup will open and give you the option to fail over. When you do, the secondary region becomes the primary. </p>
<p>This option is very useful to perform any disaster recovery if there are any issues in your primary region. </p>
<p>For more information click here: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-disaster-recovery-guidance" target="_blank">Disaster recovery and account failover</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-06-geo-replication-on-storage-accounts</link>
<pubDate>Tue, 21 Jan 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Leveraging Service Tags to define better rules in Network Security Groups</title>
<description><![CDATA[<p><strong>Service Tags</strong> come in very handy when you want to specify sources for Inbound rules and destinations for Outbound rules. They make it much easier to reference a complete service without specifying each and every IP address in that service. </p>
<p>E.g. you can specify a complete Virtual Network or Microsoft Azure Backup as a source for Inbound rules very easily by simply leveraging Service tags. Even if the address space of that virtual network changes in the future, you will not need to alter its related Network Security Group (NSG) rules.</p>
<p>Simply navigate to any NSG and then go to Inbound rules (or Outbound rules). Edit one of the existing rules or create a new one. For the Source in the case of inbound rules (and the Destination in the case of outbound rules), select &quot;Service Tag&quot; from the drop-down. Then, from the &quot;Source service tag&quot; drop-down, select the service tag for the service for which you need the rule.</p>
<img src="/images/15851213745e7b085e3ba29.png" alt="Service Tags">
<p><strong>Note</strong> from the arrow in the above screenshot that the vertical scrollbar is huge. Since this feature was launched, many services have been made available as Service Tags, making administrators' lives easier in terms of the configuration required.</p>
<p>For more information please check this link: <a href="https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview" target="_blank">Virtual network service tags</a></p>]]></description>
<link>https://HarvestingClouds.com/post/leveraging-service-tags-to-define-better-rules-in-network-security-groups</link>
<pubDate>Sun, 12 Jan 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Data Factory Basics - Creation of Azure SSIS Integration Runtime (IR) into Azure Data Factory</title>
<description><![CDATA[<p>To run/migrate SSIS packages to Azure Data Factory, we need the <strong>Azure SSIS IR</strong> (<strong>Integration Runtime</strong>) in the <strong>Azure Data Factory</strong>. Azure-SSIS IR is the runtime environment that runs SSIS packages on Azure. </p>
<p>An Azure-SSIS IR supports:</p>
<ul>
<li>Running packages deployed into the SSIS catalog (SSISDB) hosted by an Azure SQL Database server or a managed instance (Project Deployment Model).</li>
<li>Running packages deployed into file systems, file shares, or Azure Files (Package Deployment Model).</li>
</ul>
<p>You can create an Azure-SSIS IR with your desired configuration, or you can use the default integration runtime provided by Azure.</p>
<img src="/images/15851224855e7b0cb528a7d.png" alt="Default IR">
<p>Let’s create the Azure-SSIS IR using the steps below:</p>
<p>1.) Go to the data factory created in the Azure Portal and select the Author &amp; Monitor tile to open its Let's get started page in a separate tab. There, you can continue to create your Azure-SSIS IR.</p>
<img src="/images/15851224935e7b0cbd8bce6.png" alt="Configure SSIS Integrations">
<p>2.) After clicking Configure SSIS Integration, the integration runtime setup window opens. It requires the following parameters:</p>
<ol>
<li>Name – Provide an appropriate name for your integration runtime</li>
<li>Description – This parameter is optional</li>
<li>Type – This is the default setting; you can’t change this one</li>
<li>Location – Set the location</li>
<li>Node size – Select the node size for your integration runtime cluster. Choose a larger size if you want to run packages that require more memory/computing power.</li>
<li>Node Number – Choose the number from the list. Choose a higher number if you want to run many packages in parallel.</li>
<li>Edition/license – Select the SQL Server edition for your integration runtime. Select Enterprise if you want the advanced features of the integration runtime.</li>
<li>Save money – Select the Azure Hybrid Benefit option for your integration runtime: Yes or No. Select Yes if you want to bring your own SQL Server license with Software Assurance to benefit from cost savings with hybrid use. </li>
</ol>
<p>Next is the Integration runtime setup general settings.</p>
<img src="/images/15851225005e7b0cc465fe3.png" alt="IR Setup 1 - General settings">
<p>Next up we have the Integration runtime setup SQL settings.</p>
<img src="/images/15851225075e7b0ccb8236c.png" alt="IR Setup 2 - SQL Settings">
<p>Next up we have the Integration runtime setup Advanced settings.</p>
<img src="/images/15851225735e7b0d0d5207f.png" alt="IR Setup 3 - Advanced settings">
<p>Finally, we have a summary for the IR setup.</p>
<img src="/images/15851225805e7b0d1475046.png" alt="IR Setup 4 - Summary">
<p>As you can see in the screenshot below, the Azure-SSIS IR has been created.</p>
<img src="/images/15851225875e7b0d1bf3edb.png" alt="Created IR for SSIS">
<p>Alternatively, to create the Azure-SSIS IR on Azure Data Factory UI portal go to Edit tab -&gt; Connections -&gt; Integration runtimes and click New. </p>
<img src="/images/15851225955e7b0d236f516.png" alt="New IR">
<p>In the Integration Runtime Setup panel, select the Lift-and-shift existing SSIS packages to execute in Azure tile, and then select Next and proceed to create IR setup from here, similar to what we did earlier.</p>
<img src="/images/15851226245e7b0d406364c.png" alt="IR Setup">
<p>Now once we have the Integration Runtime we can start importing and running SSIS packages. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-data-factory-basics-creation-of-azure-ssis-integration-runtime-ir-into-azure-data-factory</link>
<pubDate>Fri, 10 Jan 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Data Factory Basics - Exploring the Azure Data Factory layout</title>
<description><![CDATA[<p>In this post, we will be looking at the Data factories section in Azure in detail. </p>
<p>To manage the data factory pipelines, go to All resources -&gt; Data factories. You will see all the data factories that have been created. Select the data factory you want to manage.</p>
<img src="/images/15851220815e7b0b21ae939.png" alt="Data factories">
<p>Once you select the data factory, it will open the new window (known as a blade in Azure) where you can manage your Data Factory.</p>
<img src="/images/15851220875e7b0b27f36e5.png" alt="Data factory details">
<p>Let’s discuss a few sections of the Data Factory page.</p>
<h3>Section 1: General properties</h3>
<ol>
<li>Overview: The first page when you land on this window is the overview page. </li>
<li>Activity log: Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. They are generated automatically, although you need to configure certain platform logs to be forwarded to one or more destinations in order to retain them.</li>
<li>Access Control (IAM): Review the level of access a user, group, service principal, or managed identity has to this resource.</li>
<li>Tags: Used to assign keywords to your resource.</li>
<li>Diagnose and solve problems: This provides a list of troubleshooting links for known issues a user may come across.</li>
</ol>
<h3>Section 2: Getting Started</h3>
<p>Quick Start: This provides links to basic tutorials and videos on how to work with Data Factory. They are provided by Microsoft, along with documentation about the subject. It is really helpful as it gives a quick understanding of the topic.</p>
<h3>Section 3: Monitoring</h3>
<ol>
<li>Alerts: Configure alert rules and attend to fired alerts to efficiently monitor your Azure resources.</li>
<li>Metrics: Plot and analyze metric data for your Azure resources. For more details, please refer to the Microsoft official documentation for Metrics at <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-getting-started">https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-getting-started</a></li>
<li>Diagnostic settings: They are used to collect platform logs and metrics in Azure. To get more information on how to create the diagnostic settings using Azure Portal/CLI/Powershell, please refer to the official Microsoft documentation at <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings">https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings</a></li>
</ol>
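<p>As an illustration of what diagnostic settings enable, once Data Factory logs are forwarded to a Log Analytics workspace you can query them directly. This is only a sketch, assuming logs are sent in the Azure diagnostics mode; the table and field names (AzureDiagnostics, status_s) may differ if you use the resource-specific mode:</p>
<pre><code>AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DATAFACTORY"
| where Category == "PipelineRuns"
| summarize count() by status_s, bin(TimeGenerated, 1h)</code></pre>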
<h3>Section 4: Author &amp; Monitor</h3>
<p>The Author &amp; Monitor link is the most important one; it takes you to the authoring console, which opens in a new window. This is where you can create the Azure-SSIS IR, pipelines, linked services, datasets, data flows, etc. I will explain this in more detail in upcoming posts.</p>
<h3>Section 5: Monitoring</h3>
<p>This section summarizes the activities happening in the Azure Data Factory: pipeline status (whether a run was successful or failed), trigger-run information showing when the pipelines ran, and the CPU and memory utilization of the IR. You can also view or set up all of these monitoring statuses via the Metrics mentioned in Section 3.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-data-factory-basics-exploring-the-azure-data-factory-layout</link>
<pubDate>Wed, 08 Jan 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Expose an Azure Function App via API Management</title>
<description><![CDATA[<p>You can expose an <strong>Azure Function App</strong> via <strong>API Management</strong> directly from within an Azure Function app. This functionality is provided out of the box; all you need to do is configure the link between the Azure Function App and an existing (or new) API Management instance.</p>
<p>To start, navigate to your Azure Function and click on the &quot;<em>Platform Features</em>&quot; tab. Here you will see the option for &quot;<em>API Management</em>&quot; as shown below.</p>
<img src="/images/15851181855e7afbe967613.png" alt="Option for API Management">
<p>Once you click on API Management, it takes you to a new blade. Here you can simply link your Function app to an existing API Management or even create a new one and then link the app. </p>
<img src="/images/15851181975e7afbf58e686.png" alt="Linking API Management">
<p>Now that the app is linked, you can manage API operations, apply policies, edit and download OpenAPI specification files, or navigate to the API Management instance for a full-featured experience.</p>
<p>For more information please check this link: <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-openapi-definition" target="_blank">Create an OpenAPI definition for a serverless API using Azure API Management</a></p>]]></description>
<link>https://HarvestingClouds.com/post/expose-an-azure-function-app-via-api-management</link>
<pubDate>Tue, 07 Jan 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Data Factory Basics - Creating a Data Factory</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p><strong>Azure Data Factory</strong> is a cloud-based <strong>ETL service</strong> that can also run SSIS packages. You can lift and shift existing SSIS packages to Azure and run them fully in ADF. It provides a code-free UI to run, monitor and manage the packages.</p>
<p>Using the Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores.</p>
<p>To start, navigate to All services - &gt; Analytics -&gt; Data Factories.</p>
<img src="/images/15850874675e7a83ebb9b2a.png" alt="Data Factories Option">
<p>Click Add or Create Data Factory as highlighted below.</p>
<img src="/images/15850874745e7a83f2bc355.png" alt="Adding new Data Factory">
<p>Provide the following details in the below screen:</p>
<ol>
<li>Appropriate name to your data factory</li>
<li>Select the v2 version. This is the latest version and includes the latest features of Azure Data Factory</li>
<li>The next one is to select the subscription and resource group. Here you can create a new resource group in case you don’t want to use the existing resource group.</li>
<li>Select the Location </li>
<li>Enable Git: If you check this option, you will need to provide additional details: the Git URL, repo URL, branch name and root URL.</li>
</ol>
<p>Click Create. The deployment will complete in a couple of minutes or so.</p>
<img src="/images/15850874815e7a83f99e1af.png" alt="Basic information">
<p>You can see we have created the Data Factory. Now you are ready to create the workflow pipelines for this Azure data factory.</p>
<img src="/images/15850874905e7a84028c681.png" alt="Data Factory Resource">
<p>I will explain its monitoring section and how to create pipelines in my next blog posts. So stay tuned!</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-data-factory-basics-creating-a-data-factory</link>
<pubDate>Sat, 04 Jan 2020 00:00:00 +0500</pubDate>
</item>
<item>
<title>Managing cost for both Microsoft Azure and AWS from within Azure</title>
<description><![CDATA[<p>If you are leveraging both Microsoft Azure and Amazon's AWS for the cloud, you can now use the <strong>AWS Cloud connector</strong> to view your AWS <strong>billing data</strong> directly from within the Microsoft Azure portal. Let's check out how to set this up.</p>
<p>Start by navigating to the &quot;<strong><em>Cost Management + Billing</em></strong>&quot; section and click on the &quot;<strong>Cost Management</strong>&quot; option. In the new blade, click on the &quot;Connectors for AWS (Preview)&quot; option under the settings menu. You can then configure the sync from AWS to Azure by clicking on the &quot;<strong><em>Add connector</em></strong>&quot; button.</p>
<img src="/images/15851189455e7afee1f0d49.png" alt="The option for Connection">
<p>Under basic settings, just provide a display name, management group and a subscription for the connector. </p>
<img src="/images/15851189575e7afeed0fcc1.png" alt="Creating Connector - Basics">
<p>In the &quot;AWS properties&quot; section provide the value for:</p>
<ul>
<li>Role ARN</li>
<li>External ID</li>
<li>Report Name</li>
</ul>
<p>You need to take some steps in AWS to get these values. These steps are described in detail here: <a href="https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/aws-integration-set-up-configure#create-a-role-and-policy-in-aws" target="_blank">Create a role and policy in AWS for Billing Connector in Azure</a></p>
<img src="/images/15851189655e7afef56cc06.png" alt="Creating Connector - AWS Properties">
<p>Finally, review all the settings and create the connector. Now you can view all the cost and billing data for both Microsoft Azure and Amazon's AWS in one place.</p>
<p>For more information please check this link: <a href="https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/aws-integration-set-up-configure" target="_blank">Set up and configure AWS integration</a></p>]]></description>
<link>https://HarvestingClouds.com/post/managing-cost-for-both-microsoft-azure-and-aws-from-within-azure</link>
<pubDate>Wed, 18 Dec 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Restoring Azure SQL Database from the backup</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>From time to time, you will need to restore your database from an earlier backup. In this post, we will discuss restoring an Azure SQL database from a backup file.</p>
<p>To restore the database, go to the Azure SQL database you want to restore. Once you select the SQL database, you can click on Restore as marked in the screenshot.</p>
<img src="/images/15846591425e73fac672e6d.png" alt="Restore Option">
<p>It will open the options to select the backup file to restore from, as shown below.</p>
<img src="/images/15846591495e73facdc658b.png" alt="Settings to restore database backup">
<p>The first dropdown is to select the type of backup. You can either select Point-in-time or LTR (Long-term backup retention) to select the file. When you select Point-in-time, it shows the earliest restore point available, i.e. the first time Azure took a backup of your SQL database. The Database name is the name under which the database will be restored on the server. You can change the name as per your requirement.</p>
<p>Next is the restore point selection; it highlights the dates for which a backup is available. </p>
<p>Then select the server on which you want to restore the backup. Once that is selected, you can set up the pricing tier for the restored SQL database.  </p>
<p>That's it! You are all set now. Hit that button and restore your database.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-restoring-azure-sql-database-from-the-backup</link>
<pubDate>Mon, 02 Dec 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Managing Backups for Azure SQL Database</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will discuss managing backups for the Azure SQL database. Azure takes the backup at a convenient time when the database is not in heavy use. We can’t control at what time the backup is taken, but we can set up for how many days/weeks/years we want to keep the backup files.</p>
<p>To manage the backup setting, go to the <strong>Azure SQL Server</strong> under which our Azure SQL Database has been created. Navigate to the &quot;<em>Manage Backups</em>&quot; option under the Settings.
Select the Azure SQL database from the list. Once you select the Database, it will enable the &quot;Configure retention&quot; button.</p>
<img src="/images/15845865455e72df31cd825.png" alt="Managing Backups">
<p>There are mainly four types of retention configuration settings.</p>
<ol>
<li><strong>PITR – Point in time restore</strong> configuration – This allows you to set the number of days you want to keep the backup. All Azure SQL databases (single, pooled, and managed instance databases) have a default backup retention period of seven days. You can change it to 14/21/28/35 days.</li>
<li>
<p><strong>Long-Term Retention Configurations</strong></p>
<ul>
<li><strong>Weekly LTR backups</strong> – You can specify a number between 1 and 520. This is for how many weeks you want to retain the backup.</li>
<li><strong>Monthly LTR backups</strong> – You can specify a number between 1 and 120 and select Months from the dropdown.</li>
</ul>
</li>
</ol>
<img src="/images/15845865535e72df397b04a.png" alt="Configuring Retention policies">]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-managing-backups-for-azure-sql-database</link>
<pubDate>Fri, 29 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Azure Active Directory Admin - manage Azure SQL via AD Users</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>The <strong>Azure Active Directory admin</strong> allows other Active Directory users to manage the Azure SQL database. SSMS provides an option to connect to the server using &quot;<em>Azure Active Directory authentication</em>&quot;. To leverage this functionality, the Azure Active Directory admin has to grant access to the user.</p>
<p>In this post, we are looking at this option and the way to add the Azure Active Directory admin through the Azure Portal.</p>
<p>Go to All resources-&gt; SQL server -&gt; Settings-&gt; Active Directory Admin-&gt; Set Admin</p>
<img src="/images/15845861825e72ddc67f2d8.png" alt="Set AD Admin option">
<p>Select the user from the dropdown. Once selected, you will be able to see the added user on the home page of the Active Directory admin setting.</p>
<img src="/images/15845861945e72ddd2511bd.png" alt="Add Admin wizard">
<p>This AD user is now ready to manage the database and can add other AD users to it (instead of plain SQL users).</p>
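<p>Once connected as the AD admin, other AD users are added with the documented <code>CREATE USER ... FROM EXTERNAL PROVIDER</code> T-SQL syntax. The small helper below is an illustrative sketch that only builds that T-SQL as a string (the function name and the role-assignment step are assumptions, not Azure API calls):</p>

```python
# Illustrative sketch: build the T-SQL an AAD admin would run to add
# another AAD user to the database. The helper itself is hypothetical;
# CREATE USER ... FROM EXTERNAL PROVIDER is the documented syntax.

def create_aad_user_sql(user_principal_name: str, role: str = "db_datareader") -> str:
    """Build T-SQL that creates a contained AAD user and, as an
    illustrative follow-up, adds it to a database role."""
    return (
        f"CREATE USER [{user_principal_name}] FROM EXTERNAL PROVIDER;\n"
        f"ALTER ROLE {role} ADD MEMBER [{user_principal_name}];"
    )
```

<p>You would run the resulting statements while connected to the target database with an AAD-authenticated session.</p>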
<p>To get more details about this, please refer to the Microsoft official link <a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-aad-authentication" target="_blank">Use Azure Active Directory Authentication for authentication with SQL</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-azure-active-directory-admin-manage-azure-sql-via-ad-users</link>
<pubDate>Wed, 27 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Creating Elastic Pool in Azure SQL Server</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>Azure SQL Database <strong>Elastic Pools</strong> are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price.</p>
<p>Elastic Pools exist only at the SQL Server level in Azure. These help to pool resources when you are building multiple databases on the same server. In this post, we will be checking how to create the elastic pool in the <strong>Azure SQL Server</strong>.</p>
<p>To navigate to the creation of an elastic pool, go to the server under which you want to create the elastic pool.</p>
<img src="/images/15845857295e72dc01424da.png" alt="Option for New Elastic Pool">
<p>To create the elastic pool, enter a valid name in the Elastic pool name section.
To select the compute and storage, click on &quot;Configure elastic pool&quot; and the blade shown below will open.</p>
<img src="/images/15845857355e72dc07bc483.png" alt="Basic settings for new Elastic Pool creation">
<p>Here you assign the total number of DTUs and the max database size for all the databases in the elastic pool.</p>
<img src="/images/15845857475e72dc1334a17.png" alt="Standard configurations">
<p>In the Databases section, you can add the databases that you want to include in this elastic pool.</p>
<img src="/images/15845857535e72dc1985b1e.png" alt="Adding Databases to the Elastic Pool">
<p>In the ‘Per database settings’ section, you can provide the min and max DTU assigned to each database added in the elastic pool.</p>
<img src="/images/15845857635e72dc23df3bd.png" alt="Per database settings">
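<p>The constraints behind the per-database settings can be sketched as a small check. This is a hypothetical helper, not an Azure API; the rules it encodes (each database's max must fit in the pool, and the guaranteed minimums across all databases cannot exceed the pool's eDTUs) are my reading of how per-database min/max settings interact with the pool size:</p>

```python
# Hypothetical check of 'Per database settings' against the pool size.
# Not an Azure API; names and rules are an illustrative assumption.

def check_per_database_settings(pool_edtu, db_min_dtu, db_max_dtu, db_count):
    """Return a list of problems with proposed per-database DTU settings."""
    problems = []
    if not 0 <= db_min_dtu <= db_max_dtu:
        problems.append("min DTU must be between 0 and the max DTU")
    if db_max_dtu > pool_edtu:
        problems.append("a single database may not exceed the pool's eDTUs")
    if db_min_dtu * db_count > pool_edtu:
        problems.append("guaranteed min DTUs across all databases exceed the pool")
    return problems
```

<p>For instance, ten databases with a guaranteed minimum of 20 DTUs each cannot fit in a 100 eDTU pool, and the helper flags that.</p>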
<p>You can also apply Tags during the creation (which is highly recommended). Finally, you review and create the resources. The deployment will be submitted and the pool created for you. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-creating-elastic-pool-in-azure-sql-server</link>
<pubDate>Sat, 23 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Automatic Tuning and Performance recommendation</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will be discussing the performance recommendations and automatic tuning provided by Azure. To make your SQL database perform better, you need insights that lead to good recommendations, and Azure provides these by default.</p>
<h3>Performance Recommendations</h3>
<p>Performance overview provides a summary of your database performance and helps you with performance tuning and troubleshooting.</p>
<img src="/images/15845063695e71a601ea489.png" alt="Performance recommendation">
<p>(Per Microsoft documentation) the performance recommendation options available for single and pooled databases in Azure SQL Database are:</p>
<ul>
<li>Create index recommendations - Recommends the creation of indexes that may improve the performance of your workload.    </li>
<li>Drop index recommendations - Recommends removal of redundant and duplicate indexes daily, except for unique indexes, and indexes that were not used for a long time (&gt;90 days). Please note that this option is not compatible with applications using partition switching and index hints. Dropping unused indexes is not supported for Premium and Business Critical service tiers.   </li>
<li>Parameterize queries recommendations (preview) - Recommends forced parameterization in cases when you have one or more queries that are constantly being recompiled but end up with the same query execution plan.  </li>
<li>Fix schema issues recommendations (preview) - Recommendations for schema correction appear when the SQL Database service notices an anomaly in the number of scheme-related SQL errors that are happening on your SQL database. Microsoft is currently deprecating the &quot;Fix schema issue&quot; recommendations.</li>
</ul>
<h3>Automatic Tuning</h3>
<img src="/images/15845063775e71a60998da5.png" alt="Automatic Tuning">
<p>Azure SQL Database automatic tuning provides peak performance and stable workloads through continuous performance tuning based on AI and machine learning.
Automatic tuning is a fully managed intelligent performance service that uses built-in intelligence to continuously monitor queries executed on a database, and it automatically improves their performance. This is achieved by dynamically adapting the database to the changing workloads and applying tuning recommendations. Automatic tuning learns horizontally from all databases on Azure through AI and it dynamically improves its tuning actions. The longer a database runs with automatic tuning on, the better it performs.
Azure SQL Database automatic tuning might be one of the most important features that you can enable to provide stable and peak performing database workloads.</p>
<p>What can automatic tuning do for you:</p>
<ul>
<li>Automated performance tuning of Azure SQL databases</li>
<li>Automated verification of performance gains</li>
<li>Automated rollback and self-correction</li>
<li>Tuning history</li>
<li>Tuning action T-SQL scripts for manual deployments</li>
<li>Proactive workload performance monitoring</li>
<li>Scale-out capability on hundreds of thousands of databases</li>
<li>The positive impact to DevOps resources and the total cost of ownership</li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-automatic-tuning-and-performance-recommendation</link>
<pubDate>Thu, 21 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Query Performance Insights</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will be discussing the Azure SQL Database query performance insights which provide intelligent query analysis for single and pooled databases. It helps identify the top resource consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and efficiently use the resource that you are paying for.</p>
<p>To navigate to Query Performance Insight, go to All resources -&gt; SQL Databases -&gt; select the database. Under the menu on the left-hand side, go to the Intelligent Performance category and select Query Performance Insight.</p>
<ul>
<li>Deeper insight into your database's resource (DTU) consumption</li>
<li>Details on top database queries by CPU, duration, and execution count </li>
<li>The ability to drill down into details of a query, to view the query text and history of resource utilization</li>
<li>Annotations that show performance recommendations from database advisors</li>
</ul>
<img src="/images/15845060335e71a4b13cd66.png" alt="Query Performance Insights">
<p>Optionally, you can select the Custom tab to customize the view for:</p>
<ul>
<li>Metric (CPU, duration, execution count).</li>
<li>Time interval (last 24 hours, past week, or past month).</li>
<li>The number of queries.</li>
<li>Aggregation function.</li>
</ul>
<img src="/images/15845060425e71a4ba590a8.png" alt="Customizations">]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-query-performance-insights</link>
<pubDate>Sun, 17 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Advanced Data Security</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will be discussing the <strong>Advanced Data Security</strong> feature on Azure SQL Servers. It is enabled at the server level and is then automatically enabled at the database level. It is a charged service: once you enable it, you will be charged per month.
To access Advanced Data Security, go to All resources, select the server for which you want to enable the ADS feature, go to the Security section, and click on Advanced Data Security.</p>
<img src="/images/15845009925e7191001b780.png" alt="The ADS setting at the server level">
<p>Below are the settings for ADS:</p>
<ol>
<li>Select ‘On’ to enable ADS.</li>
<li>Select the subscription from the subscription dropdown. You need a storage account for this; if you don’t have one already created, it will ask you to create a new one.</li>
<li>Provide the appropriate email address to get activity notifications. </li>
<li>You can select the type of protection from the list as highlighted in the screenshot.</li>
</ol>
<img src="/images/15845009995e7191079b3c6.png" alt="Details of the ADS">
<p>The different Advanced Threat Protection types include:</p>
<ol>
<li>SQL Injection – to protect you from such attacks</li>
<li>SQL injection vulnerability – to check whether any such vulnerabilities exist in the database</li>
<li>Data exfiltration</li>
<li>Unsafe action</li>
<li>Brute Force</li>
<li>Anomalous client logins</li>
</ol>
<img src="/images/15845010055e71910d5beb7.png" alt="Advanced Threat Protection">
<p>You can also view and update the Advanced Data Security settings from the Azure SQL Database as shown below. </p>
<img src="/images/15845010125e719114a67d4.png" alt="ADS at the Database level">
<p>As the settings are applied and billed at the server level, you get more control over them from the server, while the graphs and analytics of data security are shown at the database level.</p>
<link>https://HarvestingClouds.com/post/azure-sql-basics-advanced-data-security</link>
<pubDate>Fri, 15 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Exploring the new Query Editor in the Azure Portal</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will see how to use the query editor in the Azure portal. The Query Editor helps us query the tables, views, and stored procedures in the selected SQL database.</p>
<p>To navigate to the Query editor, go to the <em>All Resources-&gt; SQL database -&gt; select the database</em>.</p>
<p>On the left-hand side, you can select the &quot;query editor(preview)&quot; menu option. It will give you the option for providing the login credentials to connect to the Database within your SQL Server.</p>
<img src="/images/15844993115e718a6ff10a6.png" alt="Connecting to the Query Editor">
<p>Once you log in, you can view all the tables, views, and stored procedures residing in the selected SQL database. In the query panel, you can write your query and see the results immediately at the bottom of the screen.</p>
<img src="/images/15844993195e718a773845d.png" alt="Running Queries in the Query Editor">]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-exploring-the-new-query-editor-in-the-azure-portal</link>
<pubDate>Thu, 07 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Data Masking of sensitive information</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will be managing the security of data within Azure SQL Databases by implementing the data masking on data. Via this option, you can mask the sensitive information from your database like credit card details, email addresses, etc. </p>
<p>Select the Azure SQL database on which you want to implement data masking. As shown below, the option is under the Security section. Once you click on it, it will open the blade on which you can set up masks for different schemas in your database. </p>
<img src="/images/15844989265e7188eeb5982.png" alt="Data masking option">
<p>SQL Users excluded from masking (administrators are always excluded): This option means the data will be visible without any masking to the admin users. You can add other users to whom you want to make all the data visible.</p>
<p>Click ‘<strong>Add Mask</strong>’, Select the schema, table, and column on which you want to do masking.</p>
<img src="/images/15844989345e7188f6aec3d.png" alt="Setting up new Data masking">
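<p>To make the effect of a mask concrete, here is an illustrative re-implementation of the default email masking format, which exposes only the first letter and masks the rest (e.g. <code>aXX@XXXX.com</code>). This is a sketch of the behavior, not Azure's actual implementation:</p>

```python
# Illustrative sketch (not Azure's code) of the default email mask:
# expose the first letter of the local part, mask everything else.

def mask_email(email: str) -> str:
    local, _, _domain = email.partition("@")
    return f"{local[:1]}XX@XXXX.com"
```

<p>So a reader without masking exclusions would see <code>mask_email("alice@contoso.com")</code>, i.e. <code>aXX@XXXX.com</code>, instead of the real address.</p>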
<p>We have different masking field formats. These include, but are not limited to, email addresses and credit card information.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-data-masking-of-sensitive-information</link>
<pubDate>Sun, 03 Nov 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Enabling Geo-Replication for Disaster Recovery</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In this post, we will be checking how to enable Geo-replication for the Azure SQL database. Through this feature, you set up active replication to a different Geo location. This enables you to recover your database in case there is a disaster level event in your primary region. </p>
<p>To begin, select the Azure SQL database for which you want to enable the geo-replication as shown below.</p>
<img src="/images/15844983315e71869b77f9a.png" alt="Your Azure SQL Database">
<p>Once you click on the Azure SQL database, it will open up the properties of that SQL database. Click on the geo-replication as highlighted in the screenshot and it will open up the geo-replication window. #1 shows you the primary region we selected while creating the SQL database. #2 will provide the list of regions you can select for the replication. Once you select the region, the new blade will open to set up the secondary region for the SQL database.</p>
<img src="/images/15844983395e7186a328a36.png" alt="Geo-Replication Option">
<p>Create the Target server and select the pricing tier for the new region. </p>
<img src="/images/15844983455e7186a9653f2.png" alt="Detailed Settings during Geo-Replication setup">
<p>It will take some time to set up the resources, but once it's done, you can switch between the primary and the secondary region in case of failover.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-enabling-geo-replication-for-disaster-recovery</link>
<pubDate>Tue, 29 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Connecting to Azure SQL DB using SSMS</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<h3>Connecting Azure SQL DB using SSMS</h3>
<p>In this post, we will be showing how to connect SSMS (SQL Server Management Studio) to Azure SQL databases. Launch SSMS and the &quot;Connect to Server&quot; popup should open automatically. Azure SQL doesn’t support Windows Authentication, so you will need to log in using SQL Server authentication, with the username and password created in the Azure portal for the server, or via the AAD options. </p>
<img src="/images/15844873805e715bd4012ad.png" alt="SSMS - Connect to SQL Server">
<p>Before logging in, you will need the connectivity details; to get them, follow the steps below.</p>
<ol>
<li>Get the details of the server to which you will connect. Make sure you get the server name rather than the database name; the two are easy to confuse. The server name follows the convention servername.database.windows.net. </li>
<li>Set up the server firewall in the Azure portal. Otherwise, you will not be able to connect with the server. It will throw an error.</li>
</ol>
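<p>The naming convention from step 1 can be expressed as a tiny helper. This is a hypothetical function for illustration, not part of any SDK:</p>

```python
# Hypothetical helper: given the logical server name shown in the portal,
# build the fully qualified server name that SSMS expects.

def server_fqdn(server_name: str) -> str:
    return f"{server_name}.database.windows.net"
```

<p>For a server created as <code>myserver</code>, you would type <code>myserver.database.windows.net</code> into the SSMS Server name box.</p>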
<p>To set up the server firewall: go to All resources and select SQL databases; this will show all the databases you have created in your subscription.</p>
<img src="/images/15844873975e715be5da8bd.png" alt="Finding Azure SQL Databases">
<p>Select the database to which you want the connectivity. Let’s take an example of the selected database below. Once you click on your database (test01 in the example below), it will open up the blade for that specific database. Go to the &quot;<strong>Set server Firewall</strong>&quot; option as selected.</p>
<img src="/images/15844874265e715c02ef51e.png" alt="Option to Set Firewall">
<p>Here you can add the IP ranges and/or specific IP addresses from which you want to allow access to this database. In the example below, we are adding the <strong>Client IP</strong> (i.e. the IP address of the client from which you are accessing the Azure portal). </p>
<img src="/images/15844874355e715c0b484f0.png" alt="Adding the Client IP for connectivity in the Firewall">
<p>By default, there will be no IP added under the Client IP address section. Once you click on step 1, it will populate the IP address of your machine. In a large organization, you can set up a start and end IP range instead.</p>
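<p>A firewall rule is just an inclusive start/end IPv4 range, so whether a given client is allowed can be sketched with the standard library. This is an illustrative check, not how Azure evaluates rules internally:</p>

```python
# Illustrative check (not an Azure API): does a client IP fall inside a
# firewall rule's inclusive start/end IPv4 range?
import ipaddress

def ip_in_rule(client_ip: str, start_ip: str, end_ip: str) -> bool:
    client = ipaddress.IPv4Address(client_ip)
    return ipaddress.IPv4Address(start_ip) <= client <= ipaddress.IPv4Address(end_ip)
```

<p>A rule with start <code>10.0.0.0</code> and end <code>10.0.0.255</code> would admit <code>10.0.0.5</code> but reject <code>10.0.1.5</code>.</p>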
<p>Note that there is a radio button to allow Azure services and resources to access this server; this ensures that Azure services can monitor your database performance for diagnostic analytics.
Once you click Save, you should be able to connect to the Azure SQL database using SSMS.</p>
<link>https://HarvestingClouds.com/post/azure-sql-basics-connecting-to-azure-sql-db-using-ssms</link>
<pubDate>Fri, 25 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Creating Azure SQL Databases</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>In my previous blog post, I showed you how to find Azure SQL databases. In this post, we will see how to create one. To create an Azure SQL database, you need to know the subscription under which you want to create it.</p>
<p>Below is the first screen to start with the creation of the SQL database. Here you do the following:</p>
<ol>
<li>Select the <strong>Subscription</strong> under which you want to create the SQL database.</li>
<li>You can select the <strong>Resource group</strong> from the dropdown of already created resource groups or you can create a new one as highlighted below. The next section defines the properties of the new SQL database.</li>
<li>Give an appropriate <strong>name</strong> for the SQL database.</li>
<li>Select the existing <strong>server</strong> under which you want to create the SQL database, or create a new server.</li>
<li>Select ‘No’ under the SQL Elastic pool option. Select ‘Yes’ if you want your resources to be shared between multiple databases.</li>
<li>The next option is <strong>compute and storage</strong>; Azure automatically selected the Basic tier for me. If you want more powerful resources, you can change this configuration by clicking on the Configure database link.</li>
</ol>
<img src="/images/15844868925e7159ec6ce2e.png" alt="Basic settings">
<p>The next screen is for Networking. Leave the default settings as shown in the screenshot below.</p>
<img src="/images/15844869015e7159f531b66.png" alt="Networking settings">
<p>The next screen is <strong>Additional settings</strong>. This screen is used to set the collation of the SQL database, the data source, and advanced data security.
The data source is None by default. If you want to import a backup file of an existing database, select Backup. If you select the ‘Sample’ option, AdventureWorksLT will be created as the sample database.</p>
<p>For the database collation, keep the default unless you need a different collation for your region.</p>
<p><strong>Advanced Data Security</strong>: Azure charges extra if you select this option. For the time being, I am selecting as ‘Not now’. We will revisit this setting in detail in a later blog post.</p>
<img src="/images/15844869095e7159fd104cc.png" alt="Additional settings">
<p>The next screen is about Tags. Tags are used to categorize your resources. These can later be used in billing and automation etc.</p>
<img src="/images/15844869165e715a042d59e.png" alt="Tags">
<p>The final screen is ‘Review + Create’. You can review all the options selected in previous screens and it will provide you the estimated cost per month based upon your selection. If you are ok with it then click on the Create button at the bottom of the screen.</p>
<img src="/images/15844869475e715a234c628.png" alt="Review and Create">
<p>You can see the progress at the top right-hand side.</p>
<img src="/images/15844869545e715a2a01864.png" alt="Checking Deployment Status">
<p>Once the deployment is done, you can see the newly created database under All services -&gt; SQL databases.</p>
<img src="/images/15844869605e715a30a059d.png" alt="Locating deployed Database">]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-creating-azure-sql-databases</link>
<pubDate>Mon, 21 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Basics - Finding Azure SQL Databases in the Azure Portal</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-sql-and-data-factory-basics-index" target="_blank">Azure SQL and Data Factory Basics - Index</a></b></p>
<p>Azure SQL Database is Microsoft's Platform as a Service (PaaS) offering for the Microsoft SQL Server database. Over the next few weeks, we will be going back to basics and delving into this service, from basic to intermediate. We will visit all its nitty-gritty details and understand what this service has to offer. </p>
<p>Let's start with where to find Azure SQL Databases with the Azure Portal. </p>
<p>Start by going to <a href="https://portal.azure.com">https://portal.azure.com</a> and clicking on “All services” as shown below. </p>
<img src="/images/15844859395e7156334a480.png" alt="All Services">
<p>It will open the featured services on the screen as shown below. You can select SQL databases from the featured services; if it's not there, you can search for it in the search box as highlighted in the image.</p>
<img src="/images/15844859455e715639e42c8.png" alt="SQL Databases category">
<p>The next screen gives you the option to create SQL databases and shows the previously created SQL databases in the selected subscription.</p>
<img src="/images/15844859515e71563fa504d.png" alt="Finding your database">
<p>That's all there is to it.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-basics-finding-azure-sql-databases-in-the-azure-portal</link>
<pubDate>Sun, 20 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure - 05 Creating rules in NSGs and assigning NSGs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In the last post, we created a new Network Security Group or NSG. Now in this post, we will work with these NSGs and create some rules. We will also check how to assign these to a subnet (or network interface) in Azure. </p>
<p>There are two types of rules that you define in an NSG:</p>
<ol>
<li><strong>Inbound Rules</strong> - these rules dictate what incoming traffic will be allowed. </li>
<li><strong>Outbound Rules</strong> - these rules determine what outgoing traffic will be allowed.</li>
</ol>
<p>You define the source and destination for each rule, along with the protocol and port information. For <strong>inbound</strong> traffic, the destination will be the associated subnet or network interface. Similarly, for <strong>outbound</strong> traffic, the source will be the associated subnet or network interface. </p>
<p>Each rule in an NSG has a priority number. The smaller the number, the higher the rule sits in the evaluation order. When inbound or outbound traffic hits the NSG, the rules are evaluated in order of priority. If a matching rule is found, the subsequent rules are NOT evaluated; the traffic is either allowed or denied, as defined in the matching rule.</p>
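<p>The first-match evaluation described above can be sketched in a few lines. This is a minimal, hypothetical model (matching only on port, where Azure also matches on source, destination, and protocol), not Azure's implementation:</p>

```python
# Minimal sketch of NSG first-match evaluation: rules are checked in
# ascending priority order and the first match decides allow/deny.
# Hypothetical model; real NSG rules also match source/destination/protocol.

def evaluate(rules, port):
    """rules: dicts like {"priority": 100, "port": 443, "action": "allow"};
    a port of "*" matches any traffic."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == "*" or rule["port"] == port:
            return rule["action"]   # subsequent rules are not evaluated
    return "deny"                   # no matching rule: drop the traffic
```

<p>With a priority-100 rule allowing port 443 and a priority-200 catch-all deny, HTTPS traffic is allowed while everything else is denied, because the allow rule is evaluated first.</p>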
<p>Navigate to your Network Security Group (NSG) and check the overview screen. You will notice that there are default Inbound and Outbound rules; among other things, these allow connectivity between subnets within the same virtual network. You cannot edit the default rules, but you can override them by writing another rule with a higher priority. <strong>Note</strong> that a lower number means a higher priority.</p>
<img src="/images/15850320575e79ab79810f1.png" alt="NSG and Rules">
<h3>Adding Inbound and Outbound Rules</h3>
<p>You can add or update the Inbound and Outbound rules from the settings. Since the two are very similar, we will only be looking at the Inbound security rules. These are the more important ones, as they dictate who can talk to the VM from the outside world (i.e. from outside your virtual network).</p>
<p>Click on the &quot;Inbound security rules&quot; option under the Settings menu. Check all the default rules. To create a new rule, click on the &quot;<em>+Add</em>&quot; button.</p>
<img src="/images/15850320685e79ab844d47a.png" alt="Adding Inbound Security Rules">
<p>One Inbound rule consists of:</p>
<ol>
<li><strong>Source</strong> (IP address in CIDR format) and its port ranges</li>
<li><strong>Destination</strong> (IP address in CIDR format) and its port ranges</li>
<li><strong>Protocol</strong> for the communication</li>
<li><strong>Action</strong> – either <em>allow</em> or <em>deny</em>; based on this selection, the traffic will be either allowed or dropped</li>
<li><strong>Priority</strong> – a number depicting the priority of the rule in the NSG. The lower this number, the higher the priority of the rule</li>
<li>A <strong>name</strong> and <strong>description</strong> for the rule</li>
</ol>
<img src="/images/15850320775e79ab8dd06c9.png" alt="Inbound rule">
<p>For defining Source or Destination you have various template options that you can choose from. These include:</p>
<ol>
<li>Any</li>
<li>IP Addresses</li>
<li>Virtual Network</li>
<li>Application security group</li>
</ol>
<p>That's it! Hit create and the NSG rule is ready.</p>
<img src="/images/15850332125e79affc65963.png" alt="Destination">
<h3>Assigning the NSGs</h3>
<p>These NSGs can be assigned at two levels: </p>
<ol>
<li><strong>Subnet level</strong> - At this level, all the inbound and outbound rules in the NSG are applied to all the Network Interfaces (i.e. VMs) inside the subnet.</li>
<li><strong>Network Interface level</strong> - At this level, the rules only apply to that particular network interface and other interfaces are not affected.</li>
</ol>
<p>Simply navigate to either one of those as per your requirements. In the below screenshot, we are at the subnet level. Click on the &quot;<strong><em>+Associate</em></strong>&quot; button. In the popup select the virtual network and the subnet in it, with which you want to link this NSG. Hit Save to save these configurations. </p>
<img src="/images/15850332185e79b002c509d.png" alt="Associating Subnets and Network interfaces">
<p>That's it! You are all set. Now you can control the traffic via NSGs and allow only the traffic that adheres to your organization's security standards.</p>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-05-creating-rules-in-nsgs-and-assigning-nsgs</link>
<pubDate>Thu, 17 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure - 04 Creating Network Security Groups</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Network Security Groups</strong> or <strong>NSGs</strong> are a kind of firewall, in which you define which traffic will be allowed and which will be blocked. </p>
<p><strong>Note</strong> that whereas AWS has Network ACLs at the subnet level and Security Groups at the EC2 instance level, Microsoft Azure only has NSGs. These can be applied both at the subnet level as well as at the network interface level (i.e. VM level). The way these are designed, they resemble Network ACLs a lot.</p>
<p>You start by navigating to All services -&gt; Networking category -&gt; Network security groups. </p>
<img src="/images/15850293135e79a0c109953.png" alt="Network Security Groups">
<p>All your existing NSGs are listed here. To create a new one click on the &quot;<em>+Add</em>&quot; button.</p>
<img src="/images/15850293205e79a0c86e92d.png" alt="Add new NSG">
<p>Provide the subscription and resource group for the NSG. Also, provide a name and the region where the NSG will be deployed. </p>
<img src="/images/15850293255e79a0cdbe430.png" alt="Basics">
<p>Provide the tagging information to better categorize the resources. </p>
<img src="/images/15850293495e79a0e52dcb5.png" alt="Tags">
<p>Review all the settings and hit create. </p>
<img src="/images/15850293555e79a0eb75cb7.png" alt="Review and Create">
<p>Now that the NSG is created, in the next post we will see how to define rules in it and assign it to subnets or network interfaces. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-04-creating-network-security-groups</link>
<pubDate>Mon, 14 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure - 03 Service Endpoints</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Service Endpoints</strong> in Azure Virtual Networks provide the ability to connect to various Microsoft public services (like Azure SQL and Azure storage) securely. From official documentation: &quot;The endpoints also extend the identity of your VNet to the Azure services over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.&quot;. </p>
<p>When you enable a Service Endpoint, traffic from your VNet to the Microsoft service starts using your VNet's private IP addresses as its source, instead of public IP addresses. If you have any firewall rules keyed to the old public addresses, those will need to be updated. E.g. a VM connecting to a Microsoft Azure SQL database. </p>
<p><strong>Note</strong> that a service endpoint is always enabled on a <strong>Subnet</strong> level. </p>
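<p>The firewall-rule point above can be illustrated without anything Azure-specific: a minimal sketch using Python's standard <em>ipaddress</em> module, with hypothetical addresses, showing why an allow-list built on a VM's public IP stops matching once traffic arrives from its private VNet address:</p>

```python
from ipaddress import ip_address

# Hypothetical source addresses: a public IP (what the firewall rule
# was written against) and the VM's private VNet IP (what the service
# sees after the Service Endpoint is enabled).
for src in ["52.170.10.20", "10.0.1.4"]:
    addr = ip_address(src)
    print(src, "private" if addr.is_private else "public")
```

<p>A rule allowing only <em>52.170.10.20</em> would no longer match traffic arriving from <em>10.0.1.4</em>, which is why such rules need revisiting.</p>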
<p>To begin, navigate to Virtual Networks and select the virtual network you require. Under the Settings menu, click on &quot;Service endpoints&quot;. To create a new service endpoint, click on the &quot;+Add&quot; button.</p>
<img src="/images/15850264655e7995a16747c.png" alt="Service Endpoints">
<p>You are prompted to select a service. You can pick any of the services available; <em>Microsoft.Storage</em> and <em>Microsoft.Sql</em> are the most common use cases from the list. Microsoft keeps expanding this list over time.</p>
<img src="/images/15850264715e7995a732e35.png" alt="Selecting the Service">
<p>Once the service is selected, select the subnet for which you want to enable the service endpoint. Hit OK and you are done. It will take some time for the configuration to finish at the backend, but you will then have connectivity from your network to the service using private IP addresses internally (instead of public IP addresses).</p>
<img src="/images/15850276965e799a70e94fe.png" alt="Linking the Service to the Subnet">
<p>For more information check this link: <a href="https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview" target="_blank">Virtual Network service endpoints</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-03-service-endpoints</link>
<pubDate>Sun, 13 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure - 02 Working with Virtual Networks (vNets) and Subnets</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In this post, we will start working with <strong>Virtual Networks</strong> (or <strong>vNets</strong>) and <strong>Subnets</strong>. To start, navigate to all services and then go to the Networking category. Find the option for Virtual network (the very first option) and click on it. All your vNets will be listed here. Find the one you want to work on and click on it.</p>
<img src="/images/15850252995e7991134eb50.png" alt="Virtual Networks">
<p>Under the Settings category, you will find the option for &quot;<strong>Address space</strong>&quot;. This is where you can view the complete address space for your virtual network. Also, this is where you can expand your virtual network by adding more address spaces. </p>
<img src="/images/15850253055e799119629eb.png" alt="Address Spaces">
<p>Next up, you have:</p>
<ol>
<li><strong>Connected Devices</strong> - These are all the network interfaces (i.e. NIC cards on the virtual machines or VMs) that are connected to this virtual network. You can also filter through the data by name or even IP address. </li>
<li><strong>Subnets</strong> - All the subnets on the virtual network are listed here.</li>
<li><strong>Adding Subnets</strong> - You can add more subnets to the vNet by clicking on the &quot;+ Subnet&quot; button. E.g. you can have a subnet hosting an internet-facing application. And then you can have another subnet, hosting a database, that will only be accessible from the application.</li>
<li><strong>IPv4 available addresses</strong> - This is the number of IP addresses still available in a subnet. It indicates how many more devices can connect to the subnet and when the subnet is about to fill up.</li>
</ol>
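<p>The available-address figure is smaller than the raw CIDR size because Azure reserves 5 addresses in every subnet (the network address, the default gateway, two addresses for Azure DNS, and the broadcast address). A quick sketch with Python's standard <em>ipaddress</em> module, using hypothetical CIDRs:</p>

```python
from ipaddress import ip_network

def azure_usable_ips(cidr: str) -> int:
    """Usable addresses in an Azure subnet: Azure reserves 5 per subnet
    (network address, default gateway, 2 for Azure DNS, broadcast)."""
    return ip_network(cidr).num_addresses - 5

print(azure_usable_ips("10.0.0.0/24"))  # 251
print(azure_usable_ips("10.0.0.0/26"))  # 59
```

<p>This is handy when sizing subnets: a /24 gives you 251 connectable devices, not 256.</p>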
<img src="/images/15850253125e79912026f25.png" alt="Subnets">
<p>There is much more that can be done on a virtual network and we will keep on exploring the key areas that will be most useful to you. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-02-working-with-virtual-networks-vnets-and-subnets</link>
<pubDate>Sat, 12 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure - 01 Creating Virtual Network</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Microsoft Azure's virtual networking capability, part of its <strong>Infrastructure as a Service</strong> (<strong>IaaS</strong>) offering, is called a <strong>Virtual Network</strong>, unofficially known as a <strong>vNet</strong>. It provides connectivity between the virtual machines (VMs) in that network, and it can extend that connectivity to other virtual networks and even to your on-premises networks. </p>
<p>In this post, we will be creating a new Virtual Network. To start creating the virtual networks, click on the 3 lines on the top left and then click on the &quot;<strong>+New</strong>&quot; button. Here click on the Virtual Network option under the Networking category.</p>
<img src="/images/15849562375e78834d45fca.png" alt="Virtual Networks">
<p>Alternatively, you can navigate to all services and then go to the Networking category. Find the option for Virtual network (the very first option) and click on this. All your vNets will be listed here (as shown below). You can click on the &quot;<strong>+Add</strong>&quot; button to create a new virtual network.</p>
<img src="/images/15849908735e790a997ac68.png" alt="Virtual Networks section">
<p>To start with, provide the subscription and the Resource Group details. Provide a name for the virtual network and also the region where you want this to be deployed.</p>
<img src="/images/15849562435e788353301bf.png" alt="Create wizard">
<p>The second screen is the most important in the wizard. This is where you define the address spaces for your vNet in CIDR notation. You also define your subnets here. Every virtual network should have at least one subnet. </p>
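<p>The key constraint on this screen is that every subnet must fall entirely within the vNet's address space. That check can be sketched with Python's standard <em>ipaddress</em> module (the CIDRs below are hypothetical):</p>

```python
from ipaddress import ip_network

vnet = ip_network("10.0.0.0/16")     # vNet address space
subnet = ip_network("10.0.1.0/24")   # proposed subnet

# A subnet must be wholly contained in the vNet's address space,
# which also means it carries a longer (more specific) prefix.
print(subnet.subnet_of(vnet))                      # True
print(ip_network("10.1.0.0/24").subnet_of(vnet))   # False
```

<p>The portal enforces the same containment rule and rejects a subnet CIDR that falls outside the address space.</p>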
<img src="/images/15849562485e788358d945e.png" alt="IP Addresses and subnets">
<p>In the next screen for security, you can enable the DDoS protection (i.e. protection from the DDoS attacks on your network) and Firewall (i.e. Azure Firewall).</p>
<img src="/images/15849576845e7888f479ae4.png" alt="Security">
<p>Next, provide the tags to categorize your network resource.</p>
<img src="/images/15849577115e78890fe07f8.png" alt="Tags">
<p>Finally, review all the settings and create the virtual network.</p>
<img src="/images/15849577175e78891584f58.png" alt="Review and Create">
<p>Now that the virtual network is created for us, in the next few posts, we will start working with this and perform more tweaks and configurations.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-01-creating-virtual-network</link>
<pubDate>Thu, 10 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - AWS - 06 Network ACLs vs Security Groups</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Network ACLs</strong> (or Access Control Lists) and <strong>Security Groups</strong> are very similar in their functionality: both act as a firewall and restrict traffic. There are, however, a few key differences between the two that we need to understand so that we can create and apply each one appropriately to the situation at hand. </p>
<h3>Scope</h3>
<p><strong>Network ACLs</strong> are applied at the <strong>Subnet</strong> level. These restrict the traffic, coming in or out of the subnet. ACLs are therefore automatically applied to all resources (e.g. EC2 instances) in the subnet. </p>
<p>Whereas the <strong>Security Groups</strong> are applied at the <strong>EC2 instances</strong> level. </p>
<p>Network ACLs act as a secondary layer of defense: if anyone forgets to apply a Security Group to an EC2 instance, the Network ACLs still enforce your environment's standards for allowed and blocked traffic. </p>
<img src="/images/15849551955e787f3b35cf1.png" alt="Traffic Diagram with Network ACLs and Security Groups">
<p>(Image source: <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html">https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html</a>)</p>
<h3>Rule Types</h3>
<p>In a Security Group, all traffic is denied by default. You can only define &quot;Allow&quot; rules, and traffic is allowed as per these rules. </p>
<p>In Network ACLs, by contrast, you decide and configure for each rule whether the matching traffic is Allowed or Denied.</p>
<h3>Rules Evaluation</h3>
<p>In a Security Group, all rules are evaluated, regardless of their order, to determine what traffic is allowed. As all traffic is denied by default and you can only specify allow rules (with no priority or rule number), all the rules need to be evaluated to determine if the traffic is allowed or not. </p>
<p>In contrast, in Network ACLs, the rules are evaluated in order. If a matching rule is found for the traffic, then subsequent rules are NOT evaluated and the traffic is either Allowed or Denied as mentioned in the first matching rule.</p>
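<p>The two evaluation models can be sketched in a few lines of standard-library Python; the rule sets below are hypothetical:</p>

```python
from ipaddress import ip_network, ip_address

# Network ACL: numbered rules, evaluated in ascending order,
# first matching rule wins (and may Allow or Deny).
nacl = [
    (100, "10.0.0.0/8", "deny"),
    (200, "0.0.0.0/0", "allow"),
]

def nacl_decision(src: str) -> str:
    for _, cidr, action in sorted(nacl):
        if ip_address(src) in ip_network(cidr):
            return action
    return "deny"  # implicit deny when no rule matches

# Security Group: allow rules only, no ordering; traffic passes
# if ANY rule matches.
sg = ["10.0.0.0/8", "192.168.1.0/24"]

def sg_decision(src: str) -> str:
    return "allow" if any(ip_address(src) in ip_network(c) for c in sg) else "deny"

print(nacl_decision("10.1.2.3"))  # deny  (rule 100 matches first)
print(sg_decision("10.1.2.3"))    # allow (a matching allow rule exists)
```

<p>Note how the same source address gets opposite answers: the NACL's ordered Deny rule fires first, while the Security Group only needs one matching Allow.</p>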
<h3>Statefulness</h3>
<p>A Security Group is <strong>Stateful</strong>. Return traffic is automatically allowed, regardless of any rules. In contrast, the Network ACLs are <strong>Stateless</strong>. Return traffic must be explicitly allowed by the rules.</p>
<p>For more information please check the official information here: <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html" target="_blank">Internetwork Traffic Privacy in Amazon VPC</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-06-network-acls-vs-security-groups</link>
<pubDate>Mon, 07 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - AWS - 05 Security Groups</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Security Groups</strong> are another security feature in AWS Virtual Private Clouds (VPCs). A Security Group is like a firewall that restricts traffic at the EC2 instance level, and it is applied to EC2 instances. Once traffic has been filtered by the Network ACLs at the subnet level, it is filtered further by the Security Group.</p>
<p><strong>NOTE</strong>: All traffic is denied by default in a Security Group. Only the traffic explicitly allowed by its rules reaches the EC2 instances.</p>
<p>To access the security groups, navigate to the VPC section in AWS and then click on the &quot;<strong>Security Groups</strong>&quot; option in the Security menu on the left. You can view all your security groups here. Select one and view its properties in the bottom panel. These properties include Description, Inbound rules, Outbound rules, and Tags. You can either right-click on the security group or click on the Actions button to view the actions that you can take on it. The key actions we are interested in include:</p>
<ul>
<li>Edit Inbound rules</li>
<li>Edit Outbound rules</li>
</ul>
<img src="/images/15849519495e78728d61791.png" alt="Security Groups">
<p>&quot;<strong>Edit Inbound rules</strong>&quot; option is to edit the inbound rules. These govern what traffic will be allowed coming into the EC2 instances. Each rule contains:</p>
<ol>
<li><strong>Type</strong> - this determines the protocol or type of traffic. You can pick from the many pre-defined types in the list or define a custom one</li>
<li><strong>Protocol</strong> - this is determined by the type</li>
<li><strong>Port Range</strong> - this is also dependent on the type selected. For pre-defined types, this is auto-populated. For custom type, you can define a custom port range.</li>
<li><strong>Source</strong> - this is where the communication originates (i.e. the traffic coming into the instance). You can set the source to Anywhere or define a custom IP address range via a CIDR</li>
<li><strong>Description</strong> - description of the rule.</li>
</ol>
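<p>As a rough sketch of how those fields combine, a rule match amounts to a protocol check, a port-range check, and a CIDR containment check. The rule values below are hypothetical:</p>

```python
from ipaddress import ip_network, ip_address

# One inbound rule, mirroring the fields listed above.
rule = {
    "protocol": "tcp",
    "port_range": (443, 443),
    "source": "203.0.113.0/24",
    "description": "HTTPS from a known range",
}

def matches(rule, protocol: str, port: int, src: str) -> bool:
    """True when incoming traffic satisfies all parts of the rule."""
    lo, hi = rule["port_range"]
    return (rule["protocol"] == protocol
            and lo <= port <= hi
            and ip_address(src) in ip_network(rule["source"]))

print(matches(rule, "tcp", 443, "203.0.113.7"))   # True
print(matches(rule, "tcp", 443, "198.51.100.9"))  # False (source outside CIDR)
```
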
<img src="/images/15849519555e787293d9950.png" alt="Edit Inbound rules">
<p>&quot;<strong>Edit Outbound rules</strong>&quot; option is to edit the outbound rules. These govern what traffic will be allowed coming out of the EC2 instance. These rules are very similar to the inbound rules. The only difference is that now the traffic is originating from the EC2 instance and is going outbound. Hence we need to specify a <strong>Destination</strong> (instead of a Source) while defining the outbound rules. </p>
<img src="/images/15849519615e7872992c735.png" alt="Edit Outbound rules">
<p><strong>Please Note</strong> that, just as for Network ACLs, in a Security Group the address &quot;0.0.0.0/0&quot; means all IP addresses. It can be used as the Source in inbound rules or the Destination in outbound rules when the exact IP address range is unknown. Use it with caution, though: as a best practice it should be avoided wherever possible and replaced with the smallest IP address ranges that work.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-05-security-groups</link>
<pubDate>Sun, 06 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - AWS - 04 Network ACLs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Network ACLs</strong> or <strong>Access Control Lists</strong> act as a firewall for your <strong>subnets</strong>. The inbound and outbound rules defined in these ACLs determine what traffic is allowed into a subnet and what traffic is blocked, making them a valuable part of your network's security. </p>
<p>To access the Network ACLs, navigate to the VPC section in AWS and then click on the &quot;Network ACLs&quot; option in the Security menu on the left. </p>
<p>You can view all your ACLs here. Select one and view its properties in the bottom panel. These properties include Details, Inbound rules, Outbound rules, Subnet associations and Tags. You can either right-click on the ACL or click on the Actions button to view the actions that you can take on it. The key actions we are interested in include:</p>
<ul>
<li>Edit Subnet associations</li>
<li>Edit Inbound rules</li>
<li>Edit Outbound rules</li>
</ul>
<img src="/images/15849508045e786e142efd4.png" alt="Network ACLs">
<p>The &quot;<strong>Edit Subnet associations</strong>&quot; option lets you edit which subnets the ACL is linked to. You can associate an ACL with one or more subnets.</p>
<img src="/images/15849508145e786e1e101b4.png" alt="Edit Subnet associations">
<p>&quot;<strong>Edit Inbound rules</strong>&quot; option is to edit the inbound rules. These govern what traffic will be allowed coming into the subnet. Each rule contains:</p>
<ol>
<li><strong>Rule number</strong> or <strong>Rule #</strong> - this is the order in which the rules are evaluated. </li>
<li><strong>Type</strong> - this determines the protocol or type of traffic. You can pick from the many pre-defined types in the list or define a custom one</li>
<li><strong>Protocol</strong> - this is determined by the type</li>
<li><strong>Port Range</strong> - this is also dependent on the type selected. For pre-defined types, this is auto-populated. For custom type, you can define a custom port range.</li>
<li><strong>Source</strong> - this is the place from where the communication will originate (that is coming into the associated subnet)</li>
<li><strong>Allow/Deny</strong> - this determines if the inbound communication defined by your rule will be allowed or denied.</li>
</ol>
<img src="/images/15849508205e786e248864e.png" alt="Edit Inbound rules">
<p>&quot;<strong>Edit Outbound rules</strong>&quot; option is to edit the outbound rules. These govern what traffic will be allowed coming out of the subnet. These rules are very similar to the inbound rules. The only difference is that now the traffic is originating from within this subnet and is going outbound. Hence we need to specify a <strong>Destination</strong> (instead of a Source) while defining the outbound rules. </p>
<img src="/images/15849508265e786e2a5f74b.png" alt="Edit Outbound rules">
<p><strong>Please Note</strong> that the address &quot;0.0.0.0/0&quot; means all IP addresses. It can be used as the Source in inbound rules or the Destination in outbound rules when the exact IP address range is unknown. Use it with caution, though: as a best practice it should be avoided wherever possible and replaced with the smallest IP address ranges that work.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-04-network-acls</link>
<pubDate>Fri, 04 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Introducing Azure VMware Solutions</title>
<description><![CDATA[<p>Microsoft and VMWare have been competitors in the private cloud space. But in the public cloud space, this is no less than revolutionary to see VMWare VMs being supported on the Microsoft Azure platform. These solutions help customers to extend (and maybe move) their on-premises VMWare environments to Azure.</p>
<p>When you go to create a new resource, search for VMware and you will see the following 3 solutions:</p>
<ul>
<li><strong>VMware Solution by CloudSimple - Service</strong> - enables you to consume dedicated VMware Cloud environments on Azure. </li>
<li><strong>VMware Solution by CloudSimple - Node</strong> - dedicated, isolated Azure bare metal infrastructure to create native VMware Cloud environments on Azure.</li>
<li><strong>VMware Solution by CloudSimple - Virtual Machine</strong> - unified, consistent management of Virtual Machines deployed on your VMware Cloud.</li>
</ul>
<img src="/images/15851036775e7ac33d5a1ec.png" alt="VMware Solutions">
<p>For more information please check this link: <a href="https://azure.microsoft.com/en-us/overview/azure-vmware/" target="_blank">Azure VMware Solutions</a></p>]]></description>
<link>https://HarvestingClouds.com/post/introducing-azure-vmware-solutions</link>
<pubDate>Wed, 02 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - AWS - 03 Route Tables</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>A <strong>Route Table</strong> is, as the name suggests, a table of routes. It determines how traffic is routed between subnets and out of the virtual network; each entry in the table is called a &quot;<strong>route</strong>&quot;. When you create a VPC, a main route table is automatically created and associated with the subnets of the VPC. You can also create custom route tables. E.g. you may want to send all traffic through a network virtual appliance, i.e. a firewall: if you have a Palo Alto, Check Point, or other firewall in your environment and want traffic to route via it, you can achieve this with route tables.</p>
<p>To start, navigate to your VPC section in AWS. Then from the left menu, click on the &quot;<strong>Route Tables</strong>&quot; option. You can create a new table here by clicking on the &quot;Create route table&quot; button; for now, we will work with an existing route table on our VPC. Select the route table whose details you want to view or edit. </p>
<p>At the bottom panel, you will be able to view the following settings for the selected route table:</p>
<ul>
<li>Summary</li>
<li>Routes</li>
<li>Subnet Associations</li>
<li>Edge Associations</li>
<li>Route Propagation</li>
<li>Tags</li>
</ul>
<p>You can also either right-click on your route table or click on the Actions button to view the options related to this route table. Two options that we are interested in are:</p>
<ul>
<li>Edit subnet associations</li>
<li>Edit routes</li>
</ul>
<img src="/images/15849309795e7820a34d1b6.png" alt="Route Tables">
<p>Under the &quot;<em>Edit subnet associations</em>&quot; section, you can view the currently associated subnets. You can also link other subnets to the selected route table. </p>
<img src="/images/15849309875e7820ab3d05f.png" alt="Edit subnet associations">
<p>Under the &quot;Edit routes&quot; option, you can edit the current routes and add new routes to the route table. A single route defines a Destination, a Target, a Status and whether it is Propagated. The Destination is where the traffic (that originated in your subnet) is headed; a destination of &quot;<strong>0.0.0.0/0</strong>&quot; matches all traffic. Note that routes are not evaluated in list order: AWS uses the most specific route that matches the traffic (longest prefix match), so the all-zeros route acts as a catch-all for traffic that no more specific route covers. </p>
<p>When traffic matches a route's Destination, it is sent to whatever device the route's Target specifies. A Target value of &quot;<em>local</em>&quot; means the traffic is allowed and kept local to the VPC instead of being sent anywhere else. You can also send the traffic to virtual appliances and gateways here.</p>
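<p>AWS selects the route whose destination most specifically matches the traffic (longest prefix match). A minimal sketch of that selection logic with a hypothetical route table, using only Python's standard library (the target names are made up for illustration):</p>

```python
from ipaddress import ip_network, ip_address

# Hypothetical route table: destination CIDR -> target.
routes = {
    "10.0.0.0/16": "local",          # keep in-VPC traffic local
    "10.0.5.0/24": "eni-firewall",   # hypothetical appliance interface
    "0.0.0.0/0":   "igw-internet",   # hypothetical internet gateway
}

def pick_route(dst: str) -> str:
    """Return the target of the most specific matching route."""
    matching = [ip_network(c) for c in routes if ip_address(dst) in ip_network(c)]
    best = max(matching, key=lambda n: n.prefixlen)  # longest prefix wins
    return routes[str(best)]

print(pick_route("10.0.5.9"))  # eni-firewall (the /24 beats the /16 and /0)
print(pick_route("8.8.8.8"))   # igw-internet (only the catch-all matches)
```

<p>This is exactly how a firewall appliance is inserted into the path: a more specific route steers a slice of traffic to the appliance while everything else follows the broader routes.</p>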
<img src="/images/15849309945e7820b27526d.png" alt="Editing routes">
<p>Routes tables are very powerful in determining how the traffic will flow in your environment and should be designed appropriately and tested thoroughly. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-03-route-tables</link>
<pubDate>Wed, 02 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - AWS - 02 Working with VPCs and Subnets</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Once we have a <strong>Virtual Private Cloud or VPC in AWS</strong>, we can now start getting familiar with the AWS console options and how to work with VPC and the subnets in it. </p>
<p>We will first need to navigate to VPC from All services and then the &quot;Network &amp; Content Delivery&quot; category. </p>
<img src="/images/15849298305e781c26efab5.png" alt="VPC menu">
<p>From the Dashboard, you can navigate to the &quot;<strong>Your VPCs</strong>&quot; option. Next, you select your VPC from the list of all VPCs. As a best practice, you should provide names for all the VPCs. As you can see from the screenshot below, the first VPC has a name but the second one does not. To provide a name you can simply hover over that VPC in the name column and it will give you an option to edit and provide a new name.</p>
<p>Once you have selected your VPC, you can view its Description, CIDR blocks, Flow logs and Tags at the bottom part of the screen. </p>
<p>Next, with your VPC selected, either right-click on it or click on the Actions menu at the top to view the list of actions you can perform on it. One of these actions is particularly useful: &quot;<strong>Edit CIDRs</strong>&quot;.</p>
<img src="/images/15849298395e781c2f9d2e3.png" alt="VPC Details">
<p>While editing CIDRs, you can add a new CIDR by clicking on the &quot;Add IPv4 CIDR&quot; button. You would do this, for example, when the number of available IP addresses is no longer enough and you need to expand your VPC to meet growing demand. </p>
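<p>One constraint worth knowing when adding a CIDR: the new block must not overlap any CIDR already associated with the VPC. That check can be sketched with Python's standard <em>ipaddress</em> module (the CIDRs below are hypothetical):</p>

```python
from ipaddress import ip_network

existing = [ip_network("10.0.0.0/16")]   # CIDR already on the VPC
candidate = ip_network("10.1.0.0/16")    # secondary CIDR to add

# A new VPC CIDR must not overlap any existing one.
ok = not any(candidate.overlaps(net) for net in existing)
print(ok)  # True - adjacent but non-overlapping, so it can be added

# An overlapping candidate would be rejected:
print(ip_network("10.0.128.0/17").overlaps(ip_network("10.0.0.0/16")))  # True
```
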
<img src="/images/15849298465e781c36d1e08.png" alt="Editing CIDRs">
<p>Every VPC can have multiple subnets, which you can view under the Subnets menu on the left. As a best practice, here too you should name each of the subnets; as you can see from the screenshot below, only one subnet has a name. When you select a subnet, you can view its details in the bottom part of the screen. These details include:</p>
<ul>
<li>Description</li>
<li>Flow Logs</li>
<li>Route Table</li>
<li>Network ACL</li>
<li>Tags</li>
<li>Sharing</li>
</ul>
<p>You can also right-click on the selected subnet or click on the Actions button at the top to view the list of actions that you can perform on this subnet. The most important ones are:</p>
<ul>
<li>Edit network ACL association</li>
<li>Edit route table association </li>
</ul>
<img src="/images/15849298545e781c3e2a3ec.png" alt="Subnets">
<p>We will look at other services related to VPCs and Subnets in the next few posts. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-02-working-with-vpcs-and-subnets</link>
<pubDate>Tue, 01 Oct 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - AWS - 01 Creating Virtual Private Cloud (VPC)</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Virtual Private Cloud</strong> or <strong>VPC</strong> is Amazon's virtual networking capability, part of its Infrastructure as a Service (IaaS) offering. It provides connectivity between the EC2 instances in that network, and it can extend that connectivity to other networks or even on-premises. </p>
<h3>Accessing VPCs and Launching Create Wizard</h3>
<p>Let's start by creating one in AWS. We will first need to navigate to VPC from All services and then the &quot;Network &amp; Content Delivery&quot; category. </p>
<img src="/images/15849240565e780598df108.png" alt="VPC Option">
<p>Here in the Dashboard (or even the VPC tab), click on the &quot;<strong>Launch VPC Wizard</strong>&quot; button to launch the wizard to create a new VPC.</p>
<img src="/images/15849240665e7805a246c69.png" alt="Creating new VPC">
<h3>Different Pre-packaged Networking Scenarios</h3>
<p>When the wizard launches to create a VPC, there are 4 pre-packaged scenarios to choose from. <strong>Note</strong> that each of these scenarios can also be built by configuring the settings manually and assembling the VPC yourself; they are just starter packs (pre-baked templates) to get you going. </p>
<p>The first one is to build a VPC with a Single Public Subnet. Here is the official description: &quot;Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network access control lists and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.&quot;</p>
<p>When you select this option it creates:
A /16 network with a /24 subnet. Public subnet instances use Elastic IPs or Public IPs to access the Internet.</p>
<img src="/images/15849240735e7805a95187d.png" alt="VPC with a Single Public subnet">
<p>The next one is to build a VPC with Public and Private subnets. Here is the official description: &quot;In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet. Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT).&quot;</p>
<p>When you select this option it creates:
A /16 network with two /24 subnets. Public subnet instances use Elastic IPs to access the Internet. Private subnet instances access the Internet via Network Address Translation (NAT). (Hourly charges for NAT devices apply.)</p>
<img src="/images/15849240805e7805b0a307c.png" alt="VPC with Public and Private subnets">
<p>The 3rd one is to build a VPC with Public and Private subnets and Hardware VPN access. Here is the official description: &quot;This configuration adds an IPsec Virtual Private Network (VPN) connection between your Amazon VPC and your data center - effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC.&quot;</p>
<p>When you select this option it creates:
A /16 network with two /24 subnets. One subnet is directly connected to the Internet while the other subnet is connected to your corporate network via IPsec VPN tunnel. (VPN charges apply.)</p>
<img src="/images/15849241235e7805dba8a4f.png" alt="VPC with Public and Private subnets and Hardware VPN access">
<p>The last one is to build a VPC with a Private subnet only and Hardware VPN access. Here is the official description: &quot;Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec Virtual Private Network (VPN) tunnel.&quot;</p>
<p>When you select this option it creates:
A /16 network with a /24 subnet and provisions an IPsec VPN tunnel between your Amazon VPC and your corporate network. (VPN charges apply.)</p>
<img src="/images/15849241315e7805e346e2b.png" alt="VPC with a Private subnet only and Hardware VPN access">
<p>For this post, I have selected the first scenario and clicked Next.</p>
<h3>Providing Key Details for VPC</h3>
<p>In the next screen, you provide all the essential details to create your VPC along with its dependent resources, such as its subnets. These settings are as follows (the numbering follows the numbers in the screenshot below):</p>
<ol>
<li>IPv4 address space for the whole VPC, provided in CIDR notation. </li>
<li>Once you enter the IP address space for the VPC, this area shows you the number of available IP addresses. This is the maximum number of network interfaces (and therefore EC2 instances) that can join this virtual network. Note that this number is slightly less than the total number of IP addresses in the CIDR block, because AWS reserves some IP addresses for its own functionality.</li>
<li>A name for the VPC to identify it</li>
<li>The subnet's IPv4 CIDR address space. This must be contained inside the VPC's CIDR block and be smaller than it. A subnet is a sub-part of the VPC and therefore has fewer IP addresses than the complete VPC.</li>
<li>A name for the subnet</li>
<li>Service Endpoint, which allows your VPC to connect to a service like S3. You can set this up at a later point if you need it.</li>
<li>Enable DNS hostnames so that instances get DNS hostnames and can be reached by name instead of just by IP address. </li>
</ol>
<img src="/images/15849241385e7805ea3634b.png" alt="Create VPC - all-important details">
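<p>The arithmetic behind item 2 above can be checked with Python's standard library. AWS reserves five addresses in every subnet (the network address, the VPC router, the DNS server, one address reserved for future use, and the broadcast address), so a /24 subnet yields 251 usable addresses rather than 256. A minimal sketch (the 10.0.0.0 ranges are illustrative):</p>

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")     # VPC CIDR entered in the wizard
subnet = ipaddress.ip_network("10.0.0.0/24")  # one subnet carved out of it

# The subnet must be fully contained in the VPC address space (item 4).
assert subnet.subnet_of(vpc)

# AWS reserves 5 addresses per subnet: network, router, DNS,
# future use, and broadcast.
AWS_RESERVED_PER_SUBNET = 5
usable = subnet.num_addresses - AWS_RESERVED_PER_SUBNET
print(usable)  # 251
```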
<p>Once you are done configuring, hit Create. Now that we have a VPC, we will start working with it next.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-aws-01-creating-virtual-private-cloud-vpc</link>
<pubDate>Mon, 30 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Networking - Azure Virtual Networks vs AWS Virtual Private Cloud (VPC)</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Microsoft Azure and Amazon's AWS cloud platforms provide networking as part of <strong>Infrastructure as a Service</strong> (<strong>IaaS</strong>). Both provide similar services with some differences in implementation. Microsoft has named its service &quot;<strong>Virtual Networks</strong>&quot;, whereas Amazon calls its service &quot;<strong>Virtual Private Cloud</strong>&quot;, most commonly shortened to &quot;<strong>VPC</strong>&quot;.</p>
<p>Both Microsoft Azure Virtual Networks and AWS VPCs have &quot;<strong>Subnets</strong>&quot; inside a network. The VMs in Azure or EC2 instances in AWS have network interfaces that connect to the subnets within the Virtual Networks (or VPCs) and receive an IP address from that subnet. Traffic routing inside these networks is governed by &quot;<strong>Route Tables</strong>&quot;. </p>
<p>Traffic restrictions are implemented slightly differently in the two platforms. Microsoft Azure has &quot;<strong>Network Security Groups</strong>&quot;, which can be applied at either the Subnet or the Virtual Machine level. AWS, on the other hand, has <strong>Network Access Control Lists</strong> (<strong>ACLs</strong>) that are applied at the subnet level, plus <strong>Security Groups</strong> that are applied to the EC2 instances. </p>
<h3>Accessing Microsoft Azure Virtual Networks</h3>
<p>To view Virtual Networks in Azure, navigate to All services and then &quot;Virtual Networks&quot; under the Networking category.</p>
<img src="/images/15849220195e77fda35e356.png" alt="Virtual Networks">
<h3>Accessing AWS Virtual Private Clouds or VPCs</h3>
<p>To view the VPCs, navigate to All Services and then click on VPC under the Networking category.</p>
<img src="/images/15849219905e77fd86c5057.png" alt="VPCs">
<p>We will be looking at how to work with these services in detail in the next few posts. </p>
<p>For more information, click on the below official links:
<a href="https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview" target="_blank">Azure Virtual Network</a>
<a href="https://aws.amazon.com/vpc/" target="_blank">Amazon Virtual Private Cloud</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-networking-azure-virtual-networks-vs-aws-virtual-private-cloud-vpc</link>
<pubDate>Sun, 29 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 07 Setting up Access Points on S3 bucket</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Access points</strong> can be used to provide access to your bucket securely from a network (VPC) or from the internet. To access bucket resources from a VPC access point, you’ll need to use the AWS CLI, AWS SDK, or Amazon S3 REST API.</p>
<p>To create a new access point, navigate to your S3 bucket and navigate to the &quot;<strong>Access points</strong>&quot; tab. Click on the &quot;<em>+Create access point</em>&quot; button.</p>
<img src="/images/15848383465e76b6ca1f068.png" alt="Access points">
<p>Provide a name for the access point. Select if you want to create this access point for a virtual network (VPC) or the internet. If you select the former then provide an ID for the VPC.</p>
<img src="/images/15848386815e76b819429c0.png" alt="Creating new Access Point">
<p>That's all there is to it. Now you are ready to access your S3 bucket securely from a VPC.</p>
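<p>When you then call the bucket through the CLI, SDK, or REST API, you address it by the access point's ARN instead of the bucket name. The ARN follows the documented format <code>arn:aws:s3:&lt;region&gt;:&lt;account-id&gt;:accesspoint/&lt;name&gt;</code>; a small sketch (region, account ID, and access point name are placeholder values):</p>

```python
def access_point_arn(region, account_id, name):
    """Build the ARN used in place of a bucket name when calling
    S3 through an access point (CLI, SDK, or REST API)."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

# Hypothetical values for illustration:
arn = access_point_arn("us-east-1", "123456789012", "my-access-point")
print(arn)  # arn:aws:s3:us-east-1:123456789012:accesspoint/my-access-point
```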
<p>For more information click here: <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html#access-points-vpc" target="_blank">Creating Access Points Restricted to a Virtual Private Cloud</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-07-setting-up-access-points-on-s3-bucket</link>
<pubDate>Sat, 28 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 06 Configuring replication on S3 buckets</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>For any <strong>S3 bucket</strong> with critical objects, you should always set up the data to replicate to a second bucket. You can configure either cross-region or same-region replication using replication rules.</p>
<p>To begin, navigate to your S3 bucket and click on the &quot;<strong>Management</strong>&quot; tab. Then click on the &quot;<strong>Replication</strong>&quot; settings. Click on the &quot;<strong>+ Add rule</strong>&quot; to create a new rule.</p>
<img src="/images/15848372115e76b25b0c547.png" alt="Replication">
<p><strong>Pre-requisite</strong>: Versioning must be enabled on the S3 bucket before you can set up replication. If you haven't enabled it yet, the wizard gives you the option to do so here.</p>
<img src="/images/15848372175e76b2614a71f.png" alt="Versioning Requirements">
<p>Next, while setting sources, you can either set to replicate all the objects in the bucket or filter objects to be replicated based on the prefix or tags that they have.</p>
<img src="/images/15848372235e76b2671f5a2.png" alt="Set source">
<p>Next, you have to select a destination bucket. This can be in the same region or a different region. This can even reside in a different account altogether.</p>
<img src="/images/15848372285e76b26c954f8.png" alt="Destination bucket">
<p>If you don't have any bucket in the destination then you can create one right here while setting the destination.</p>
<img src="/images/15848372635e76b28fbd692.png" alt="Creating new bucket in the destination">
<p>While creating a new destination bucket, you can set its options like Storage class, object ownership, replication time control, etc.</p>
<img src="/images/15848372705e76b29622e78.png" alt="Destination options">
<p>Next, you specify the IAM role that will be used to replicate the data. It should have appropriate access to both the source and the destination.</p>
<img src="/images/15848372755e76b29bdefac.png" alt="IAM rule">
<p>Finally, you review and create the replication rule.</p>
<img src="/images/15848372815e76b2a192ad5.png" alt="Review and create">
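<p>Under the hood, the wizard assembles a replication configuration that you could also pass to the S3 API yourself. A sketch of that shape, assuming the <code>put_bucket_replication</code> call and with placeholder role and bucket names:</p>

```python
# Role and bucket ARNs below are placeholders for illustration.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate all objects
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket",
                "StorageClass": "STANDARD_IA",  # optional storage-class override
            },
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }
    ],
}
# A boto3 client call would then look like:
# s3.put_bucket_replication(Bucket="my-source-bucket",
#                           ReplicationConfiguration=replication_config)
```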
<p>Once the rule is set up the data will start getting copied over and protected.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-06-configuring-replication-on-s3-buckets</link>
<pubDate>Wed, 25 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 05 Adding Lifecycle Rules for S3 buckets</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>When you have objects, i.e. files, in your <strong>S3 bucket</strong>, you are essentially paying for the space that these objects take up. After a certain period of time, you will either no longer need these objects or you may want to archive them. If you leave them lying around and rely on manual clean-up, chances are you will forget about them and end up paying for the extra space. <strong>Lifecycle</strong> rules within S3 buckets are a way to set an expiration date on files and take automated actions when it is reached. </p>
<p>To start, navigate to your S3 bucket and go to the <strong>Management</strong> tab, then click on Lifecycle. To create a new lifecycle rule, click on the &quot;<em>+Add lifecycle rule</em>&quot; button. A new popup launches to set up the rule. You can have as many rules as you want.</p>
<img src="/images/15848362045e76ae6c6e798.png" alt="Add lifecycle rules">
<p>To set up a lifecycle rule, give it a name and a scope. You can either apply the rule to all objects in the S3 bucket or narrow the scope using name prefixes or tags.</p>
<img src="/images/15848362145e76ae76dade1.png" alt="Name and scope">
<p><strong>Transitions</strong> are the settings for moving objects from one storage class to another. In the example below, the transition is set from Standard storage to Standard-IA after 90 days. You can add more transitions here, e.g. moving the data to Glacier storage after 180 days. </p>
<img src="/images/15848362205e76ae7c37f14.png" alt="Transitions">
<p>Under <strong>Expiration</strong> settings, you can set the objects to delete after a certain period of time.</p>
<img src="/images/15848362265e76ae8205327.png" alt="Expiration">
<p>Finally, review the settings and create the lifecycle rules.</p>
<img src="/images/15848362315e76ae8734695.png" alt="Review and create">
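<p>The same rule can be expressed as the lifecycle configuration document the S3 API accepts. A sketch mirroring the transitions above (the rule ID, prefix, and 365-day expiration are illustrative values):</p>

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire",     # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # scope: placeholder prefix
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},     # delete after a year
        }
    ],
}
# A boto3 client call would then look like:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```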
<p>Setting up lifecycle rules is a great way to optimize costs in the longer run. It is a best practice to define a policy at your organization and adhere to it for every bucket. It may seem like extra effort, but it can save significant cost over time.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-05-adding-lifecycle-rules-for-s3-buckets</link>
<pubDate>Sat, 21 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 04 Setting Permissions on S3 Bucket</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>You can manage access to the <strong>S3 bucket</strong> at a much more granular level through its permission settings. You can reach these settings by navigating to the S3 bucket and clicking on the &quot;<strong>Permissions</strong>&quot; tab. </p>
<p>Here you have the option to modify:</p>
<ul>
<li>&quot;Block public access&quot; related policies</li>
<li>Access Control Lists - modifying and creating them</li>
<li>Bucket policies</li>
<li>CORS configurations - i.e. Cross-Origin Resource Sharing for HTTP access</li>
</ul>
<p>The &quot;Block public access&quot; policies are those that you set up while creating the S3 bucket. You can modify these settings here by clicking on the Edit button and can set this up at a granular level.</p>
<img src="/images/15848321455e769e91c01f4.png" alt="Permissions">
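<p>A bucket policy, one of the options listed above, is a JSON document attached to the bucket. A minimal sketch of the classic public-read policy (the bucket name is a placeholder; only apply something like this if the data really is meant to be public):</p>

```python
import json

# Placeholder bucket name; this policy allows anonymous read of all
# objects, the classic "public static content" style policy.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}
# The console (or put_bucket_policy) takes the policy as a JSON string:
policy_json = json.dumps(bucket_policy)
```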
<p>Using Access Control Lists, you can set up much more granular access for the following:</p>
<ol>
<li>Bucket Owner</li>
<li>Public access</li>
<li>S3 log delivery group</li>
</ol>
<p>You can grant or revoke access for the following operations:</p>
<ul>
<li>List objects</li>
<li>Write objects</li>
<li>Read bucket permissions</li>
<li>Write bucket permissions</li>
</ul>
<img src="/images/15848321525e769e98475dc.png" alt="Access Control List">]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-04-setting-permissions-on-s3-bucket</link>
<pubDate>Thu, 19 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 03 Configuring S3 Bucket Properties</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>There are various settings on the <strong>S3 bucket</strong> that you can either toggle or configure. These properties can make your S3 bucket more secure or optimize performance or can help you to manage it better.</p>
<p>To access these properties, simply navigate to your S3 bucket and click on the <strong>Properties</strong> tab.</p>
<img src="/images/15848311515e769aafe830b.png" alt="Properties option">
<p>When you click on any one of them, it expands to show the details. Many of the properties are simple toggle switches; others require a bit more configuration.</p>
<p>Below you can view the settings for:</p>
<ul>
<li><strong>Versioning</strong> - allows you to keep a history of the changes. This is required as a pre-requisite for some features (that we will discuss in upcoming posts)</li>
<li><strong>Server access logging</strong> - to log all events at the Server level</li>
<li><strong>Static website hosting</strong> - This is one of the most useful features of S3 buckets (Azure storage accounts offer a comparable static website hosting option). If you have a static website, i.e. simple HTML pages with no database, you can serve it as a website by simply copying all your files to a bucket and enabling this option. It provides you with a public URL that you can use to browse your website, starting with a start-up page.</li>
<li><strong>Object-level logging</strong> - to log all events at the object level, i.e. for each file. It uses the CloudTrail service, which should be deployed in the same region as the bucket. </li>
</ul>
<img src="/images/15848311605e769ab832e84.png" alt="Properties details">
<p>Below is the setting for:</p>
<ul>
<li><strong>Default encryption</strong> - you can choose to encrypt all objects, i.e. all files in the bucket, using either AES-256 or AWS-KMS.</li>
</ul>
<img src="/images/15848311665e769abe8af8c.png" alt="Details continued">
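<p>The default-encryption choice corresponds to a server-side encryption configuration document in the S3 API. A sketch of the AES-256 (SSE-S3) variant, assuming the <code>put_bucket_encryption</code> call; the KMS alternative is shown in the comment:</p>

```python
# Server-side encryption configuration of the shape accepted by
# the put_bucket_encryption API; AES-256 (SSE-S3) shown here.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"
                # For AWS-KMS you would instead use:
                #   "SSEAlgorithm": "aws:kms",
                #   "KMSMasterKeyID": <your key ARN>
            }
        }
    ]
}
# e.g. s3.put_bucket_encryption(
#     Bucket="my-bucket", ServerSideEncryptionConfiguration=encryption_config)
```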
<p>You can scroll further down to access more advanced settings like:</p>
<ul>
<li><strong>Object lock</strong> - prevents objects from accidental deletion</li>
<li><strong>Tags</strong> - to organize resources and get better-categorized cost reporting</li>
<li><strong>Transfer acceleration</strong> - enables optimizations to transfer data at higher speeds</li>
<li><strong>Events</strong> - to set up notifications for specific events on the bucket</li>
<li><strong>Requester pays</strong> - using this option, the requester pays for the data access instead of the owner of the data</li>
</ul>
<img src="/images/15848311735e769ac54643e.png" alt="Advanced settings">
<p>The Properties tab allows extensive customization of your S3 bucket in terms of how your data is stored and how it is accessed.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-03-configuring-s3-bucket-properties</link>
<pubDate>Wed, 18 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 02 Working with S3 bucket and uploading files</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Once you have an S3 bucket in AWS, you can start uploading files to it and working with it. In this post, we will start working with the S3 bucket we created in an earlier post and will start uploading files to it.</p>
<p>To start, simply navigate to the S3 service and click on your bucket. It will take you to the <strong>Overview</strong> screen. </p>
<img src="/images/15848294615e7694151bdc7.png" alt="Overview">
<p>The first thing you should do is <strong>create a folder</strong>. As a best practice, try to organize your files as systematically as possible. While creating the folder, you can set its encryption settings.</p>
<img src="/images/15848294685e76941c685e0.png" alt="Creating folder">
<p>Once inside the folder (or even from the root level), you can create more folders or directly upload files by clicking on the Upload button. This launches the <strong>upload files</strong> wizard. Simply drag and drop your files and folders here to start the upload.</p>
<p>You can not only upload files but also set properties on them, which we will visit next. </p>
<img src="/images/15848294755e769423ecc98.png" alt="Uploading file">
<p>Next, you set the <strong>permissions</strong> for the uploaded files. This defines who gets access to the files.</p>
<img src="/images/15848294835e76942b15a01.png" alt="Setting permissions ">
<p>You can set the storage class for the uploaded files. Standard is the default option. There are various other options that you can choose from based on your performance and cost requirements. The Glacier storage tier is for archival data that is not available immediately but can be kept archived for a long period of time. <strong>Note</strong> that you are billed based on your selection here, so make sure you select the appropriate storage class.</p>
<img src="/images/15848295135e76944953802.png" alt="Setting properties">
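<p>Programmatically, the same choices become parameters on the upload call. A sketch of the extra arguments a boto3-style upload would pass, mirroring the wizard (all values are illustrative):</p>

```python
# ExtraArgs-style parameters for a programmatic upload, mirroring
# the wizard's storage class / encryption / metadata choices.
upload_args = {
    "StorageClass": "STANDARD_IA",         # cheaper for infrequently accessed data
    "ServerSideEncryption": "AES256",      # encrypt the object at rest
    "Metadata": {"uploaded-by": "admin"},  # arbitrary key/value metadata
}
# e.g. s3.upload_file("photo.jpg", "my-bucket", "photos/photo.jpg",
#                     ExtraArgs=upload_args)
```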
<p>Scroll down to view more properties for <strong>Encryption</strong>, <strong>Metadata</strong>, and <strong>Tagging</strong>.</p>
<img src="/images/15848295205e76945073b50.png" alt="More properties">
<p>Finally, review all the settings and hit the Upload button to upload the files. </p>
<img src="/images/15848295275e76945788722.png" alt="Review and upload">
<p>Once the files are uploaded, you can select any of these files and access a number of actions from the &quot;Actions&quot; menu. These include downloading the file, changing its storage class, adding tags, deleting it, etc.</p>
<img src="/images/15848295345e76945e37c43.png" alt="Actions">
<p>Now that we have started consuming the S3 bucket and it has some files in it, in the next few posts we will delve a little deeper into the settings of an S3 bucket and its functionality.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-02-working-with-s3-bucket-and-uploading-files</link>
<pubDate>Tue, 17 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - AWS - 01 Creating Simple Storage Service or S3 buckets</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>AWS Simple Storage Service or S3 is the managed service provided by the AWS environment for storing your blobs like your photos, videos, and any other files. In this service, you create S3 buckets as resources. These are very similar to Microsoft Azure storage accounts.</p>
<p>To start, navigate to S3 service from All services.</p>
<img src="/images/15848281965e768f24a1f58.png" alt="S3 option">
<p>Here you will see the option to create a new bucket. Click on the button to get started.</p>
<img src="/images/15848282035e768f2bc5b44.png" alt="Creating bucket">
<p>Provide a name and region for the bucket. Note that the name can only contain lowercase letters and numbers and has to be globally unique across AWS. The region should be close to where you or your customers are located.</p>
<img src="/images/15848282105e768f323d160.png" alt="Simple settings">
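<p>The naming rules can be sanity-checked locally before you hit the console. A simplified validator (it covers the 3-63 character, lowercase-letter/digit/hyphen rules; global uniqueness can only be checked by AWS itself):</p>

```python
import re

def looks_like_valid_bucket_name(name):
    """Simplified check of S3 bucket-name rules: 3-63 characters,
    lowercase letters, digits, and hyphens, starting and ending
    with a letter or digit. Uniqueness is enforced by AWS, not here."""
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name) is not None

print(looks_like_valid_bucket_name("my-photos-2019"))  # True
print(looks_like_valid_bucket_name("My_Photos"))       # False: uppercase and underscore
```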
<p>When you scroll down, you have settings to control access to the S3 bucket. The default is to &quot;Block all public access&quot;. You can tweak the settings and select more granular access by unchecking the main checkbox and then selecting the ones that apply to your situation.</p>
<p>Also, to help prevent accidental deletion of your data you can enable the object locks on the S3 bucket.</p>
<img src="/images/15848282165e768f3886296.png" alt="Advanced settings">
<p>Once you hit the create button, the bucket is created for you. Next, we will start working with this S3 bucket.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-aws-01-creating-simple-storage-service-or-s3-buckets</link>
<pubDate>Sun, 15 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Azure - 05 Configuring Firewall and vNet access on Storage accounts</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>To secure access to your <strong>Microsoft Azure Storage accounts</strong>, you can configure a firewall on them. You can allow only particular IP addresses or an address range (specified in CIDR format). Alternatively, instead of specifying IP addresses, you can grant access only to specific virtual networks (vNets) in your environment, thereby blocking access from everywhere else. </p>
<p>To access these settings, navigate to your storage account and under the settings, click on the &quot;<strong>Firewall and virtual networks</strong>&quot; option.</p>
<img src="/images/15848208695e76728540e6d.png" alt="Firewall and vNet access">
<p>Here you have multiple options and ways to configure the firewall. Let's look at these in a systematic way:</p>
<ol>
<li>First, you can either leave the access open to all networks or limit it to a specific virtual network. If you select the latter option, you should also select your virtual network for which you want to provide access. You do so by clicking on the &quot;+Add existing virtual network&quot; option. It will open up a new blade and here you can find your existing vNet and add the same.</li>
<li>Next, you configure the Firewall and add IP addresses or a range of IP addresses (in CIDR format) for which the access needs to be opened up</li>
<li>Finally, you have the exceptions. Unless the data is very critical and confidential, I recommend always allowing access to Microsoft services by selecting the first checkbox; it is checked by default. You can also allow access to storage account logging and metrics from any network here.</li>
</ol>
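<p>The firewall rules in step 2 are plain CIDR containment checks: a request is admitted if the client address falls inside any allowed range. The logic can be sketched with Python's standard library (the ranges below are illustrative documentation addresses):</p>

```python
import ipaddress

# Allowed ranges as you would enter them in the firewall blade:
# one /24 CIDR range and one single address.
allowed = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.42/32"),
]

def is_allowed(client_ip):
    """Return True if the client address falls inside any allowed range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in allowed)

print(is_allowed("203.0.113.7"))  # True: inside the /24 range
print(is_allowed("192.0.2.1"))    # False: not in any allowed range
```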
<p>Firewall and vNet access is a generally overlooked option, but you should always configure it in your environment to secure access to your storage accounts.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-05-configuring-firewall-and-vnet-access-on-storage-accounts</link>
<pubDate>Tue, 10 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Azure - 04 Securely Accessing Storage Accounts</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>When working with Microsoft Azure <strong>Storage accounts</strong>, you will need to access these accounts either interactively from a tool or programmatically from a script or code. There are two primary ways to securely access your storage accounts. In this post, we will review them and see how to access their settings from the portal.</p>
<h3>Access Keys</h3>
<p>The first is to use the <strong>Access keys</strong>. Every storage account has two access keys linked to it, referred to as the primary and secondary keys, or simply key1 and key2. To access these keys, navigate to your storage account and click on &quot;Access keys&quot; under settings. Note that you only need one key (either one) to connect to the storage account. When you use a tool like Storage Explorer or PowerShell scripts to connect to your storage account, you can use this key along with the name of the storage account to connect and perform operations. </p>
<img src="/images/15848161415e76600dcfaa4.png" alt="Access Keys">
<p><strong>Note</strong> that these keys are the keys to the kingdom for your storage accounts. You can't scope or restrict the access they grant in any way. These keys are meant only for administrators and should not be used by anyone else. You also can't set any kind of expiry on them, so once somebody has these keys they can keep using them to access the storage accounts. The only remedy is to click on the refresh arrow next to the key1 or key2 text to regenerate the keys. As a best practice, you should always use SAS tokens, i.e. Shared Access Signature keys, to perform any operations.</p>
<h3>Shared Access Signature (SAS tokens)</h3>
<p>SAS tokens are the way you should be accessing storage accounts, programmatically or otherwise. These are what you should use (with an expiry date) to give temporary access to anybody in your organization, or even outside it, especially if there is sensitive data in your storage accounts. </p>
<p>From official page: &quot;A shared access signature (SAS) is a <strong><em>URI</em></strong> that grants restricted access rights to Azure Storage resources. You can provide a shared access signature to clients who should not be trusted with your storage account key but whom you wish to delegate access to certain storage account resources. By distributing a shared access signature URI to these clients, you grant them access to a resource for a specified period of time. An account-level SAS can delegate access to multiple storage services (i.e. blob, file, queue, table). Note that stored access policies are currently not supported for an account-level SAS.&quot;</p>
<img src="/images/15847709465e75af829cea1.png" alt="Shared access signature">
<p>Once you have tweaked the settings for the kind of access you want to provide, click on &quot;Generate SAS and connection string&quot;. It will generate a Shared Access Signature (SAS token), a connection string, and various URIs using that SAS token. These URIs are for the storage account's blob, file share, table, and queue services. </p>
<img src="/images/15848180835e7667a3f1e03.png" alt="Shared access signature Connection string and URLs">
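<p>A SAS token is just a URL query string, so its restrictions are readable. Each parameter encodes one of the settings you chose: <code>sp</code> is the permission set, <code>se</code> the expiry time, <code>sig</code> the signature. A sketch parsing an illustrative (non-working, fabricated-signature) token with the standard library:</p>

```python
from urllib.parse import parse_qs

# An illustrative, non-working account SAS token; real tokens come from
# the portal's "Generate SAS and connection string" button.
sas = ("sv=2019-02-02&ss=bfqt&srt=sco&sp=rl"
       "&se=2019-12-31T00:00:00Z&spr=https&sig=FAKESIGNATURE")

params = parse_qs(sas)
print(params["sp"])  # ['rl']  -> read + list permissions only
print(params["se"])  # ['2019-12-31T00:00:00Z']  -> token expiry time
```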
<p>Now you know the best ways to connect to your storage accounts securely. Depending on your situation, ensure you use the correct approach. If in doubt, always go with Shared Access Signatures. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-04-securely-accessing-storage-accounts</link>
<pubDate>Sat, 07 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Azure - 03 Event driven automation of Azure Storage accounts</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Microsoft Azure <strong>Storage accounts</strong> generate various events. You can automate these events and can take action when these events occur. </p>
<p>Build reactive, event-driven apps with a fully managed event routing service that is built into Azure. Event Grid helps you build automation into your cloud infrastructure, create serverless apps, and integrate across services and clouds.</p>
<p>A few examples:</p>
<ul>
<li>When a user uploads a large video file, a time-to-live value is assigned to that file and that file can either be archived or deleted after that time. </li>
<li>If an image is uploaded, you can immediately run Optical Character Recognition (OCR) on it and capture the data into a database.</li>
<li>If someone uploads a document file that adheres to a certain format, it can automatically be processed and the captured details fed into a database</li>
<li>Generating transcripts for video uploads etc. </li>
</ul>
<p>Note that this feature is available out of the box in V2 storage accounts. To access, simply navigate to your storage account resource and click on the Events option. </p>
<img src="/images/15847641095e7594cd8119f.png" alt="Event driven automation">
<p>The solution actually leverages other Azure resources to provide this automation. Some of these include:</p>
<ol>
<li>Azure <strong>Event Grid</strong> - it has built-in support for events coming from Azure Storage blobs</li>
<li>Azure <strong>Logic Apps</strong> (serverless workflow and integration)</li>
<li>Azure <strong>Functions</strong> (serverless code)</li>
</ol>
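<p>A handler subscribed through Event Grid receives the notification as a JSON payload whose <code>eventType</code> is <code>Microsoft.Storage.BlobCreated</code> and whose <code>data.url</code> points at the uploaded blob. A trimmed-down sketch of what a Function-style handler would do (the event values are illustrative, not a complete Event Grid payload):</p>

```python
# A trimmed-down Event Grid notification of the kind a Function or
# Logic App handler receives when a blob is uploaded (values illustrative).
event = {
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/videos/blobs/demo.mp4",
    "data": {
        "url": "https://myaccount.blob.core.windows.net/videos/demo.mp4",
        "contentLength": 1048576,
    },
}

def handle(event):
    """React only to blob-created events, e.g. to queue OCR or transcription."""
    if event["eventType"] != "Microsoft.Storage.BlobCreated":
        return None  # ignore every other event type
    return event["data"]["url"]

print(handle(event))  # https://myaccount.blob.core.windows.net/videos/demo.mp4
```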
<p>You create a subscription to Event Grid. It receives the event notification and passes the information to the handlers to take some actions. Logic Apps and Azure Functions are two such handlers where you can write the automation. We will delve deeper with an example in a future post.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-03-event-driven-automation-of-azure-storage-accounts</link>
<pubDate>Tue, 03 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Azure - 02 Data Transfer Options</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>When you have a Storage account in Microsoft Azure, you will want to transfer data into its blob storage to start consuming it. The initial data transfer can be huge, and you will want to optimize it. </p>
<p>Navigate to your Storage account and click on the &quot;<strong>Data transfer</strong>&quot; setting to view all the options that you have. Use this section to define your estimated data size, approximate network bandwidth, and transfer frequency. Based on your selection, the wizard shows you the options best suited to transferring your data.</p>
<img src="/images/15847543155e756e8b95377.png" alt="Data Transfer options">
<p>Let's look at each of these options closely.</p>
<h3>1. AzCopy</h3>
<p>This is a command-line utility that allows scripted or programmatic transfers.</p>
<ul>
<li>A command-line data transfer utility </li>
<li>Copy data to and from Azure blobs, files, tables</li>
<li>Best use: Resilient bulk data transfer at high throughput</li>
</ul>
<p>You can learn more about this service here: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2fblobs%2ftoc.json" target="_blank">AzCopy</a></p>
<h3>2. Azure PowerShell or Azure CLI</h3>
<p>These are actually two different options for scripted or programmatic transfer. </p>
<ul>
<li>A command-line interface to manage Azure Resources</li>
<li>Best use: Build scripts to manage Azure resources and small data sets</li>
<li><strong>Azure PowerShell</strong> installs on Windows or use in browser with Azure Cloud Shell</li>
<li><strong>Azure CLI</strong> installs on macOS, Linux, Windows or use in browser with Azure Cloud Shell</li>
</ul>
<p>You can learn more about Azure PowerShell here: <a href="https://docs.microsoft.com/en-us/powershell/azure/get-started-azureps?view=azps-3.6.1&viewFallbackFrom=azps-1.4.0" target="_blank">Azure PowerShell</a>
You can learn more about Azure CLI here: <a href="https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli" target="_blank">Azure CLI</a></p>
<h3>3. Directly from the Azure Portal</h3>
<p>This is the easiest option of all, as you upload the data directly from the Azure portal.</p>
<ul>
<li>A web-based interface</li>
<li>Explore files and upload new files one at a time</li>
<li>Best use: If you don’t want to install tools or issue commands</li>
</ul>
<p>You can learn more about this service here: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-solution-small-dataset-low-moderate-network" target="_blank">Data transfer for small datasets with low to moderate network bandwidth</a></p>
<h3>4. Azure Data Factory</h3>
<p>Here you are using managed data pipelines and have the most control over your data coming into the Azure storage account. </p>
<ul>
<li>A hybrid data integration service with enterprise-grade security</li>
<li>Create, schedule, manage data integration at scale</li>
<li>Best use: Build recurring data movement pipelines</li>
</ul>
<p>You can learn more about this service here: <a href="https://azure.microsoft.com/en-us/services/data-factory/" target="_blank">Azure Data Factory</a></p>
<h3>5. Azure Storage Explorer</h3>
<p>Another graphical (GUI) option, provided through a standalone utility that you can install on your laptop or desktop.</p>
<ul>
<li>A GUI-based cross-platform client</li>
<li>Upload or download from Azure blobs, files, tables, queues, and Azure Cosmos DB entities</li>
<li>Best use: Easy file management</li>
</ul>
<p>You can learn more about this service here: <a href="https://azure.microsoft.com/en-us/features/storage-explorer/" target="_blank">Azure Storage Explorer</a></p>
<h3>6. Azure Storage REST API/SDK</h3>
<p>This option leverages the underlying REST APIs and SDKs directly to transfer the data. It is highly customizable: you can write your own custom code for scripted or programmatic transfers, and even build your own utilities on top of it.</p>
<ul>
<li>Programmatic access to Blob, Queue, Table, and File services in Azure</li>
<li>Best use: Build your custom applications</li>
</ul>
<p>You can learn more about this service here: <a href="https://docs.microsoft.com/en-us/rest/api/storageservices/" target="_blank">Azure Storage REST API Reference
</a></p>
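<p>As a hedged illustration of how the REST API addresses storage resources (the account, container, and blob names are made up, and the required authentication header is deliberately omitted), a Put Blob request is built roughly like this:</p>

```python
def blob_url(account: str, container: str, blob: str) -> str:
    """Build the REST endpoint for a blob (used by Put Blob / Get Blob)."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

def put_blob_headers(content_length: int) -> dict:
    """Minimal headers for a Put Blob call; the Authorization header
    (Shared Key or SAS) is omitted here for brevity."""
    return {
        "x-ms-version": "2019-12-12",   # REST API version header
        "x-ms-blob-type": "BlockBlob",  # block blob upload
        "Content-Length": str(content_length),
    }

print(blob_url("mystorageacct", "photos", "cat.jpg"))
# prints https://mystorageacct.blob.core.windows.net/photos/cat.jpg
```

In practice the SDKs wrap this plumbing for you; building raw requests like this is only worthwhile when you need full control.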
<h3>7. Azure File Sync</h3>
<p>This is the continuous sync and site-local caching option.</p>
<ul>
<li>Move files from a server to a cloud-native Azure file share with zero downtime</li>
<li>Up to 100 TiB capacity per Azure file share</li>
<li>Multi-site sync to multiple servers</li>
</ul>
<p>You can learn more about this service here: <a href="https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-planning" target="_blank">Planning for an Azure File Sync deployment
</a></p>
<h3>8. Azure Data Box Edge and Data Box Gateway</h3>
<p><strong>Azure Data Box Edge</strong> is an on-premises device.</p>
<ul>
<li>On-premises Microsoft physical network device. It supports SMB/NFS</li>
<li>Edge compute processes data in local cache before fast, low bandwidth usage transfer to Azure</li>
<li>Best use: Preprocess data, inference Azure ML, continuous ingestion, incremental transfer</li>
</ul>
<p><strong>Azure Data Box Gateway</strong>, on the other hand, is a virtual device that also sits on-premises. </p>
<ul>
<li>On-premises virtual network device in your hypervisor</li>
<li>Local cache-based fast, low bandwidth usage transfer to Azure over SMB/NFS</li>
<li>Best use: Continuous ingestion, cloud archival, incremental transfer</li>
</ul>
<p>You can learn more about this service here: <a href="https://azure.microsoft.com/en-us/products/azure-stack/edge/" target="_blank">Azure Stack Edge</a></p>
<h3>9. Azure Data Box and Data Box Disk</h3>
<p>Azure Data Box provides offline transfer via a physical device. </p>
<p><strong>Time to transfer</strong>: Typically 80 TB/order in 10-20 days</p>
<ul>
<li>A 100 TB (80 TB usable) rugged, encrypted device shipped by Microsoft</li>
<li>Offline transfer to Azure Files, blobs, managed disks, over SMB/NFS/REST</li>
<li>Best use: Initial/recurring bulk data transfer for medium to large data sets</li>
</ul>
<p>Azure Data Box Disk is very similar to Data Box. It also provides offline transfer via a device (in this case, a set of disks).</p>
<p><strong>Time to transfer</strong>: Typically 35 TB/order in 5-10 days</p>
<ul>
<li>A 40 TB (35 TB usable) set of up to five, encrypted, 8 TB SSDs shipped by Microsoft</li>
<li>Mount USB 3.0/SATA disks as drives for transfer to Azure Files, blobs, managed disks</li>
<li>Best use: Initial/recurring bulk transfer for small to medium data sets</li>
</ul>
<p>You can learn more about Azure Data box service here: <a href="https://azure.microsoft.com/en-us/services/databox/" target="_blank">Azure Data Box</a></p>
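<p>A quick sanity check makes the case for these offline devices. As a hedged sketch (the 80% link utilization and the 20-day Data Box upper bound quoted above are the assumptions), you can compare shipping a device against pushing the data over the wire:</p>

```python
def prefer_data_box(data_tb: float, bandwidth_mbps: float,
                    data_box_days: float = 20.0) -> bool:
    """True if shipping a Data Box would likely beat the network transfer.
    Uses the upper bound of the 10-20 day turnaround quoted above and
    assumes 80% of the link is usable for the transfer."""
    seconds = (data_tb * 1e12 * 8) / (bandwidth_mbps * 1e6 * 0.8)
    return seconds / 86400 > data_box_days

# One 80 TB Data Box order vs. the network:
print(prefer_data_box(80, 100))    # True: ~93 days over a 100 Mbps link
print(prefer_data_box(80, 10000))  # False: under a day over 10 Gbps
```

The crossover point depends heavily on your real-world bandwidth, which is exactly the input the portal's Data transfer wizard asks you for.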
<h3>10. Azure Import/Export</h3>
<p>This is another offline transfer device option. Key features are:</p>
<ul>
<li>Ship up to 10 of your own disks to transfer data to and from Azure</li>
<li>Mount disks as drives for offline transfer to Azure Files, Blobs</li>
<li>Best use: Initial bulk transfer for small to medium data sets</li>
</ul>
<p>You can learn more about this service here: <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service" target="_blank">Azure Import/Export</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-02-data-transfer-options</link>
<pubDate>Sun, 01 Sep 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Azure - 01 Creating Azure Storage Account</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Note</strong>: This blog has been updated with Private endpoints feature and latest screenshots.</p>
<p>As we saw earlier, Microsoft Azure's <strong>Storage account</strong> is a managed service for storing your blobs like photos, videos, and any other files. Microsoft has bundled the below services within Storage Accounts:</p>
<ul>
<li><strong>Blob storage</strong> - to securely store your blobs like photos, videos, and other files, etc.</li>
<li><strong>File shares</strong> - SMB file shares</li>
<li><strong>Table storage</strong> - Tabular storage to store non-relational data</li>
<li><strong>Queue storage</strong> - to scale apps</li>
</ul>
<p>These are very similar to the AWS S3 buckets.</p>
<p>To access Microsoft Azure Storage Accounts, navigate to All services -&gt; Storage category -&gt; Storage accounts. </p>
<img src="/images/15847321445e7517f07eece.png" alt="Storage accounts">
<p>Here you can view all your existing storage accounts (if you have any). Click on the &quot;+Add&quot; button to create a new Storage account.</p>
<img src="/images/15847481625e75568229207.png" alt="Add new Storage Account">
<p>For the basic settings, start with selecting the right subscription and creating or using an existing Resource Group.</p>
<img src="/images/15847481685e755688bae7a.png" alt="Basic settings">
<p>Provide a name for the storage account. The name can contain only lowercase letters and numbers and has to be globally unique (it becomes part of a DNS name). Select the geo-location for the deployment. Next, you have the performance setting: Standard or Premium.</p>
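<p>As a quick local sanity check before you hit the portal, the naming rule can be expressed as a regular expression. This sketch assumes the documented 3-24 character, lowercase-letters-and-numbers rule; global uniqueness can only be verified by Azure itself:</p>

```python
import re

# Storage account naming rule: 3-24 characters, lowercase letters and
# digits only. (Uniqueness across Azure is not checkable locally.)
NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

print(is_valid_storage_account_name("harvestingclouds01"))  # True
print(is_valid_storage_account_name("My-Storage"))          # False: capitals, hyphen
```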
<p>You also select between the two account kinds: general-purpose v1 and v2. For all new storage accounts, select v2.</p>
<p>You then decide the replication strategy for the storage account. You have options like Locally redundant storage (LRS), Zone-redundant storage (ZRS), Geo-redundant storage (GRS), and Read-access geo-redundant storage (RA-GRS). For maximum replication, select RA-GRS.</p>
<p>Next, decide if you want the Hot or Cool access tier. The default is Hot; Cool is for infrequently accessed data. </p>
<img src="/images/15847481765e755690cf5e2.png" alt="Instance details">
<p>Under networking you have 3 options for connectivity:</p>
<ul>
<li>Public endpoint (all networks)</li>
<li>Public endpoint (selected networks)</li>
<li>Private endpoint</li>
</ul>
<p>The private endpoint is the newest option. It assigns a network interface (and therefore a private IP address) to your storage account, so you can access the account securely from your virtual network, as if it were just another resource on that network.</p>
<img src="/images/15847481845e75569840199.png" alt="Networking Details">
<p>Under advanced settings, you configure security, large file shares, data protection, Data Lake Storage, etc.</p>
<img src="/images/15847515165e75639c25286.png" alt="Advanced settings">
<p>You can apply tags next to categorize the resource. It is optional but is highly recommended as a best practice.</p>
<img src="/images/15847515235e7563a323b9b.png" alt="Tags">
<p>Finally, review and create the storage account.</p>
<img src="/images/15847515295e7563a98bbc4.png" alt="Review and Create">]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-azure-01-creating-azure-storage-account</link>
<pubDate>Wed, 28 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Storage - Microsoft Azure Storage Accounts vs AWS Simple Storage Service (S3)</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>For storing your blobs like photos, videos, and any other files, both cloud platforms have great services to offer. AWS calls its service <strong>Simple Storage Service</strong>, more popularly known as <strong>S3</strong>. Microsoft Azure calls its service <strong>Storage Accounts</strong>. Microsoft has bundled the below services within Storage Accounts:</p>
<ul>
<li><strong>Blob storage</strong> - to securely store your blobs like photos, videos, and other files, etc.</li>
<li><strong>File shares</strong> - SMB file shares</li>
<li><strong>Table storage</strong> - Tabular storage to store non-relational data</li>
<li><strong>Queue storage</strong> - to scale apps</li>
</ul>
<p>Of all these services, Azure blob storage within Storage accounts is the key comparable service to S3.</p>
<p>To access AWS S3, navigate to All Services and then to S3 under the Storage category, as shown below.</p>
<img src="/images/15847321275e7517dfd8bdb.png" alt="AWS S3">
<p>To access Microsoft Azure Storage Accounts, navigate to All services -&gt; Storage category -&gt; Storage accounts. </p>
<img src="/images/15847321445e7517f07eece.png" alt="Storage accounts">
<p>We will review how to get started with each of these services in the next couple of blogs.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-storage-microsoft-azure-storage-accounts-vs-aws-simple-storage-service-s3</link>
<pubDate>Sat, 24 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Azure Managed Disks vs AWS Elastic Block Store (EBS)</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Both Microsoft Azure and AWS provide a managed service for the disks of your virtual machines (or EC2 instances): storage optimized for I/O-intensive read/write operations and high-performance virtual machine workloads. Microsoft Azure's offering is called &quot;<strong>Managed disks</strong>&quot;, whereas AWS calls it &quot;<strong>Elastic Block Store (EBS) Volumes</strong>&quot;. </p>
<h3>AWS - Elastic Block Store (EBS) Volume</h3>
<p>Let's look at AWS's offering first. When you create EC2 instances, you automatically get at least one volume (i.e. disk) for the OS disk. You can additionally add multiple volumes for the data disk. You can view all this data by navigating to EC2 and then under the settings go to the &quot;Elastic Block Store&quot; category. You will find your <strong>Volumes</strong> listed here. </p>
<p>There is also an option to view <strong>Snapshots</strong>, which lists any snapshots of the volume, whether taken manually or by the AWS Backup service.</p>
<img src="/images/15846787265e7447469a267.png" alt="Elastic Block Store (EBS) Volume">
<h3>Microsoft Azure Managed disks</h3>
<p>Within Microsoft Azure, the disks of Virtual Machines (VMs) used to be VHD files sitting in a Storage account (similar to an S3 bucket). Microsoft upgraded its offering to Managed Disks, with loads of granular controls and optimizations. These disks can be switched from standard HDD to premium SSD just by switching a value in a dropdown. You can also apply role-based access control to individual disks (a feature that was not available before in Microsoft Azure). </p>
<p>You can find the managed disks under All services -&gt; Compute category. Click on Disks to view all the disks in your environment. </p>
<p>If you want to view a specific disk on your VM then you can also navigate to your virtual machine and then find the Disks setting. The OS and any data disks will also be listed there. </p>
<img src="/images/15846787485e74475c6b65d.png" alt="Managed Disks">
<p>When you click on any one of the managed disks, it takes you to the overview page, where you can see various details about the disk, including the &quot;Owner VM&quot; to which it is attached. You can take a snapshot of the disk directly from here.</p>
<img src="/images/15846787545e7447627a2b4.png" alt="Overview and Create Snapshot">
<p>Under settings of the disk, one notable setting is the <strong>Configuration</strong> of the disk. You can quickly select the Storage type and Size of the disk. </p>
<img src="/images/15846787605e74476830e4b.png" alt="Storage type and size">
<p>For more information, please use these links:</p>
<ul>
<li><a href="https://azure.microsoft.com/en-us/services/storage/disks/" target="_blank">Azure managed disks</a></li>
<li><a href="https://aws.amazon.com/ebs/?ebs-whats-new.sort-by=item.additionalFields.postDateTime&ebs-whats-new.sort-order=desc" target="_blank">Elastic Block Store (EBS)</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-azure-managed-disks-vs-aws-elastic-block-store-ebs</link>
<pubDate>Wed, 21 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - AWS - 05 AWS Auto Scaling service</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>AWS Auto Scaling</strong> is the latest offering from AWS. It enables you to quickly discover all scalable resources in your environment and then sets up application scaling for you. You can either choose one of the AWS-recommended settings to optimize for performance, cost, or both, or define your own custom scaling conditions. </p>
<p>You start by navigating to the AWS Auto Scaling by going to All Services and then navigating to the Management and Governance section.</p>
<img src="/images/15846760855e743cf508966.png" alt="AWS Auto Scaling">
<p>You can create multiple scaling plans here. Start creating your scaling plan by clicking on the &quot;Get started&quot; button.</p>
<img src="/images/15846760935e743cfd3ad6c.png" alt="Create a Scaling Plan">
<p>Next, you find the scalable resources in your AWS environment. You can do so in various ways: search for resources by AWS CloudFormation stack, search by tags applied to the resources, or directly select EC2 Auto Scaling groups. </p>
<p>As shown below, we have selected the auto scaling group that we created in one of the previous blog posts. </p>
<img src="/images/15846761005e743d0417140.png" alt="Finding Scalable resources">
<p>In the next step, you have the option to specify the scaling strategy. You start by providing a name and then scroll down for more details. </p>
<img src="/images/15846761065e743d0a6c3da.png" alt="Specify Scaling Strategy">
<p>You can either select one of the predefined auto scaling strategies optimized for availability or cost or both. You can even define your own Custom strategy and have more granular control over these settings. </p>
<p>Expand the Configuration details to view the applied settings. </p>
<img src="/images/15846761435e743d2f179b0.png" alt="Selecting or Defining the Scaling Strategy">
<p>Next up you have various advanced settings like dynamic scaling and predictive scaling. To configure, first, select your auto scaling group and then expand the category you want to tweak. </p>
<img src="/images/15846761505e743d364c23a.png" alt="Advanced Settings">
<p>Finally, you review all the settings and create your scaling plan. </p>
<img src="/images/15846761565e743d3cc39bf.png" alt="Review and Create">
<p>Once the scaling plan is ready, it will do most of the heavy lifting for you by automating the scaling in and scaling out of the instances based on the scaling strategies defined in your plan. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-05-aws-auto-scaling-service</link>
<pubDate>Mon, 19 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - AWS - 04 Creating Auto Scaling Groups - Part 2</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>This is part 2 of the blog where we are creating the <strong>Auto Scaling Group</strong> in AWS. Let's dive right in, where we left off in the last blog.</p>
<p>Now we are at <strong>Step 3</strong>, where we define the optional load balancing details and health checks. The latter is something you should always configure as a best practice.</p>
<img src="/images/15846744185e7436723f0ec.png" alt="Step 3 - Load balancing and health checks">
<p>In Step 4 we configure the group size and scaling policies. You provide a minimum and maximum number of instances, along with a desired capacity, which becomes the default number of instances. </p>
<img src="/images/15846744265e74367a0201a.png" alt="Step 4 - Group size and scaling policies">
<p>Scrolling down you have the scaling policies. I recommend having a &quot;Target tracking scaling policy&quot;. There are different criteria that you can configure. In the screenshot below, you have the policy based on the average CPU utilization.</p>
<img src="/images/15846744325e7436803c007.png" alt="Optional scaling policies">
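<p>To build intuition for what a target tracking policy does with that average-CPU criterion, here is a rough, hedged sketch of the proportional math involved (a simplification for illustration, not AWS's exact algorithm, and the numbers are made up):</p>

```python
import math

def target_tracking_capacity(current_capacity: int, current_cpu: float,
                             target_cpu: float,
                             min_size: int, max_size: int) -> int:
    """Simplified target-tracking math: scale capacity in proportion to
    how far the average CPU is from the target, then clamp the result
    to the group's minimum/maximum size."""
    desired = math.ceil(current_capacity * current_cpu / target_cpu)
    return max(min_size, min(desired, max_size))

# 4 instances running at 80% average CPU, targeting 50%:
print(target_tracking_capacity(4, 80.0, 50.0, min_size=2, max_size=10))  # 7
```

The clamping step is why the minimum and maximum sizes you set in Step 4 matter: the policy can never scale the group outside those bounds.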
<p>You can configure various notifications on your auto scaling group next. </p>
<img src="/images/15846744395e7436875ccda.png" alt="Notifications">
<p>Finally, you have the option to configure tags on the auto scaling group.</p>
<img src="/images/15846744735e7436a98e19d.png" alt="Tags">
<p>Review all the settings and create the auto scaling group. That's all there is to it. Simulate some load on your application and monitor how the instances scale out and in according to your scaling policies. </p>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-04-creating-auto-scaling-groups-part-2</link>
<pubDate>Thu, 15 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - AWS - 04 Creating Auto Scaling Groups - Part 1</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Auto Scaling Groups</strong> are how you run multiple instances hosting the same application at the same time in AWS. This is what provides high availability for your applications deployed on EC2 instances. These groups scale in and out based on your scaling conditions. You can create auto scaling groups in AWS using either a Launch Template or a Launch Configuration; templates are the recommended way to go. </p>
<p>You start by navigating to the EC2 in AWS.</p>
<img src="/images/15846734155e743287b87b6.png" alt="EC2">
<p>Scroll all the way down and navigate to <strong>Auto Scaling Groups</strong>. Click on the &quot;Create Auto Scaling group&quot; to start the creation wizard.</p>
<img src="/images/15846734225e74328e1cd3e.png" alt="Auto Scaling Groups">
<p>Right on the first screen, you provide a name for the Auto Scaling group. You also choose whether to use a Launch Template or a Launch Configuration; whichever you decide on, select the appropriate value from the drop-down.</p>
<img src="/images/15846734275e743293ca4b1.png" alt="Choosing Launch Template or Configuration">
<p>Once you select the template it shows you detailed settings that are configured inside the template.</p>
<img src="/images/15846734345e74329a11ad3.png" alt="Providing Launch Template">
<p>Under Step 2, you configure the purchase options and instance types. You have the option of adhering to the template configuration as shown below. </p>
<img src="/images/15846734645e7432b887b48.png" alt="Configure Settings - Adhering to the launch template">
<p>Or you have the option of combining purchase options and instance types along with the settings from the template. You can provide cost optimization and instance type details here.</p>
<img src="/images/15846734715e7432bf37e28.png" alt="Configure Settings - Combine purchase options and instance types">
<p>We are only halfway here. We will continue creating the Auto Scaling Group in part 2 of this blog. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-04-creating-auto-scaling-groups-part-1</link>
<pubDate>Wed, 14 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - AWS - 03 Creating Launch Templates</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>A <strong>Launch Template</strong> captures the information about how to create an EC2 instance. It has all the relevant information to launch a new EC2 instance, including instance-level settings such as the Amazon Machine Image (AMI), instance type, key pair, and security groups. This gives you a predictable and repeatable way of deploying multiple EC2 instances in your environment.</p>
<p>Please note that these provide similar functionality to <strong>Launch Configurations</strong> (which we saw in the previous blog). Launch Templates are the new way of doing things: you can have multiple versions of a launch template, but not of a launch configuration. New features are being added only to Launch Templates, and that is what AWS recommends using. </p>
<p>Launch Template (or Launch Configuration) is also required for creating Auto Scaling Groups in AWS.</p>
<p>Start by navigating to EC2 in AWS.</p>
<img src="/images/15846717205e742be88d499.png" alt="EC2">
<p>Navigate to &quot;Launch Templates&quot; under the Instances category. If you do not have any templates yet, you will be presented with the general information. Click on &quot;Create launch template&quot; to start creating a new template.</p>
<img src="/images/15846717275e742befc0f92.png" alt="Launch Templates">
<p>You start by providing basic information like the template name, version description, etc. Note that versioning is one of the features that sets a launch template apart from a launch configuration in AWS. The latter cannot have versions. </p>
<p>If you want to use this template with the Auto Scaling then make sure to check the box to provide guidance as shown below.</p>
<img src="/images/15846717335e742bf5bca22.png" alt="Create Launch Template Wizard">
<p>Next, provide the Amazon Machine Image, instance type details, and key pair information for login. </p>
<img src="/images/15846717435e742bff6dbf1.png" alt="Contents - AMI, Instance Type and Key Pair">
<p>Next, you provide the networking details under Virtual Private Cloud to define the connectivity along with a security group. You also provide the Storage information. </p>
<img src="/images/15846717725e742c1c1fd96.png" alt="VPC and Storage">
<p>Finally, you can add tags to your launch template. You also should configure at least one network interface for your EC2 instances. Click on &quot;Create launch template&quot; to create the launch template with all the selected settings.</p>
<img src="/images/15846717785e742c2218929.png" alt="Tags and Network Interfaces">
<p>Now that we have the Launch Template, we are ready to use it to create Auto Scaling Groups. We will check this in the upcoming blog. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-03-creating-launch-templates</link>
<pubDate>Sat, 10 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - AWS - 02 Creating Launch Configurations</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>A <strong>Launch Configuration</strong> captures the information about how to create an EC2 instance. It has all the relevant information to launch a new EC2 instance, including instance-level settings such as the Amazon Machine Image (AMI), instance type, key pair, and security groups. This gives you a predictable and repeatable way of deploying multiple EC2 instances in your environment.</p>
<p>Please note that these provide similar functionality as <strong>Launch Templates</strong> (which we will check in the next blog). This is the older way of doing things. You can have multiple versions of a launch template but not for a launch configuration. Any new features are being added to the Launch Templates and that is what AWS recommends to use. </p>
<p>This (or Launch Template) is also required for creating Auto Scaling Groups in AWS.</p>
<p>You start by navigating to EC2 in AWS and then clicking on Launch Configurations under &quot;Auto Scaling&quot; category as shown below. Click on &quot;Create launch configuration&quot; to launch the wizard. </p>
<img src="/images/15846706805e7427d88a518.png" alt="Option to create Launch Configurations">
<p>The first thing you do, similar to creating a new EC2 instance, is select an Amazon Machine Image (AMI).</p>
<img src="/images/15846706865e7427de7cc0d.png" alt="Selecting AMI">
<p>Next, you select the Instance type. This defines your vCPUs and Memory for the underlying EC2 instance (that will be built using these configurations).</p>
<img src="/images/15846706935e7427e5451a5.png" alt="Instance Type">
<p>Next, you provide a name for your launch configurations. You can also configure to use Spot instances here to save on cost (for any non-critical workloads). You can also configure CloudWatch detailed monitoring. </p>
<img src="/images/15846707005e7427ec0804d.png" alt="Launch Configuration identifiers">
<p>Next up are the Storage settings. You update the settings for the OS disk and add more volumes if you need.</p>
<img src="/images/15846707345e74280ecd65e.png" alt="Storage information">
<p>Next, you provide the security group configurations. You can create a new one or reuse one of the existing ones.</p>
<img src="/images/15846707415e742815f1da6.png" alt="Security Group configurations">
<p>Finally, you review the settings and create your launch configurations.</p>
<img src="/images/15846707485e74281cb7c57.png" alt="Review and create">
<p>Before the launch configurations can be completed, you are also prompted to provide a key pair to be used to authenticate.</p>
<img src="/images/15846707545e742822d5c82.png" alt="Key Pair settings">
<p>That's it! Now you have a reusable configuration that you can use to build multiple EC2 instances in a predictable fashion.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-02-creating-launch-configurations</link>
<pubDate>Mon, 05 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - AWS - 01 All Related Resources for Auto Scaling</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>AWS has various resources that provide autoscaling functionality. Each resource type has its own unique role, and these resources can work with each other. </p>
<p>These services are:</p>
<ul>
<li><strong>Launch Templates</strong> - the latest addition - a template for EC2 instances. It has all the relevant information needed to launch a new EC2 instance: you can configure the compute, storage, networking, and other instance-level settings, such as the Amazon Machine Image (AMI), instance type, key pair, and security groups. This gives you a predictable and repeatable way of deploying multiple EC2 instances in your environment.</li>
<li><strong>Launch Configurations</strong> - very similar to launch templates, but the older way of doing things. You can have multiple versions of a launch template but not of a launch configuration. </li>
<li><strong>Auto Scaling Groups</strong> - how you run multiple instances hosting the same application at the same time in AWS. This is what provides you with high availability. These groups scale in and out based on your scaling conditions. You can create Auto Scaling groups in AWS using either a launch template or a launch configuration; templates are the recommended way to go. </li>
<li><strong>AWS Auto Scaling</strong> - the latest offering from AWS, which quickly discovers all scalable resources in your environment and then sets up application scaling for you. You can choose one of the AWS-recommended settings to optimize performance, cost, or both, or you can define your own custom scaling conditions. </li>
</ul>
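<p>To make the first of these resources concrete, here is a minimal Python sketch of the payload shape that boto3's <code>create_launch_template</code> call accepts, covering the instance-level settings listed above. All names and IDs below are placeholders, not values from a real account:</p>

```python
def build_launch_template_request(name, ami_id, instance_type,
                                  key_name, security_group_ids):
    # Assemble the instance-level settings a launch template captures:
    # AMI, instance type, key pair, and security groups.
    return {
        "LaunchTemplateName": name,
        "LaunchTemplateData": {
            "ImageId": ami_id,              # Amazon Machine Image (AMI)
            "InstanceType": instance_type,  # compute size, e.g. t3.micro
            "KeyName": key_name,            # key pair for connectivity
            "SecurityGroupIds": security_group_ids,
        },
    }

# Placeholder values; in practice this dict would be passed to
# boto3's ec2_client.create_launch_template(**request).
request = build_launch_template_request(
    "web-tier", "ami-0abcdef1234567890", "t3.micro",
    "my-key-pair", ["sg-0123456789abcdef0"])
```

<p>Because an Auto Scaling group only references the template by name and version, the same template can launch any number of identical EC2 instances.</p>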
<p>In the next few blog posts, we will take a practical look at these resources and how to work with each of these. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-aws-01-all-related-resources-for-auto-scaling</link>
<pubDate>Sat, 03 Aug 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure AD activity logs now integrated with Log Analytics in Azure Monitor</title>
<description><![CDATA[<p>I believe that there should be a <strong>single pane of glass</strong> approach to monitoring in Microsoft Azure where you can monitor different aspects of Azure in one place. Microsoft's latest logs integration is a step forward towards this vision. Now Azure AD activity logs are integrated with the Diagnostics Logs for Azure Monitor and Log Analytics in Azure Monitor. These Azure AD logs can now be retained for long term as well, leveraging Azure Storage accounts.</p>
<p>One side benefit is that these logs can now be sent to any third-party SIEM tool as well. </p>
<p>To get started with this service, simply navigate to your Azure Active Directory and then scroll all the way down to Diagnostic settings. Click on &quot;<strong><em>+Add Diagnostic Setting</em></strong>&quot; to get started.</p>
<img src="/images/15851017635e7abbc3a48ec.png" alt="Azure AD Diagnostic Settings">
<p>Select the log type for &quot;Audit logs&quot; and for destination select &quot;Send to Log Analytics&quot; and configure your workspace. Save your settings.</p>
<img src="/images/15851017755e7abbcfde306.png" alt="Integration with Log Analytics">
<p>Now the Azure AD logs will start appearing in the Log Analytics workspace. You can query this data and can analyze the data in various ways as well.</p>
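<p>Once the logs land in the workspace, they are queryable with the Kusto Query Language (KQL). As an illustrative sketch, the snippet below assembles a query over the <code>AuditLogs</code> table that this integration populates; the specific aggregation is an assumption for demonstration, not from this post:</p>

```python
def audit_log_query(days=7):
    # Count Azure AD audit events per operation over the last N days.
    return (
        "AuditLogs\n"
        f"| where TimeGenerated > ago({days}d)\n"
        "| summarize Count = count() by OperationName\n"
        "| order by Count desc"
    )

print(audit_log_query())
```

<p>Paste the resulting query into the Logs blade of the workspace to analyze the audit activity.</p>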
<p>For more information, check these official resources that cover this functionality in detail: </p>
<ul>
<li><a href="https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-active-directory-activity-logs-in-azure-log-analytics-now/ba-p/274843" target="_blank">Azure Active Directory Activity logs in Azure Log Analytics</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics" target="_blank">How to set up the integration</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-ad-activity-logs-now-integrated-with-log-analytics-in-azure-monitor</link>
<pubDate>Thu, 25 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure SQL Database now has a Serverless Compute Tier</title>
<description><![CDATA[<p>Azure SQL Database now has a <strong>Serverless</strong> Compute Tier. Note that this option is only available with the newer vCore based pricing. This provides you with loads of advantages as this optimizes the pricing and enables all the goodness of the Serverless architecture for your SQL Databases.</p>
<p>The key points are:</p>
<ul>
<li>The underlying compute resources (i.e. CPU and memory) are auto-scaled based on the demand</li>
<li>The billing is done on a per-second basis based on the usage of vCores</li>
</ul>
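<p>The per-second billing model is easy to sketch. The function below is illustrative arithmetic only (the rate and the minimum-vCore floor are placeholders, not published prices): each second is billed at the vCores in use, floored at the configured minimum, and seconds where the database is auto-paused incur no compute charge:</p>

```python
def serverless_compute_bill(per_second_vcores, min_vcores, price_per_vcore_second):
    # per_second_vcores: vCores used in each second; None = auto-paused.
    total = 0.0
    for used in per_second_vcores:
        if used is None:      # paused: no compute billing
            continue
        total += max(used, min_vcores) * price_per_vcore_second
    return total

# Three active seconds (0.5, 2 and 4 vCores) with a 1-vCore floor
# at a placeholder rate: 7 billable vCore-seconds in total.
bill = serverless_compute_bill([0.5, 2, 4, None], 1, 0.000145)
```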
<p>When you go to configure pricing on your database (either during creation or afterward from its settings), under the vCore-based pricing model you can now select the <strong>Serverless</strong> option. </p>
<img src="/images/15851070515e7ad06b0dc18.png" alt="Azure SQL Database Serverless Compute Tier">
<p>For more information please check this link: <a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-serverless" target="_blank">Azure SQL Database serverless</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-sql-database-now-has-a-serverless-compute-tier</link>
<pubDate>Wed, 24 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - Azure - 04 Viewing the Run History of VMSS</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>When you have an active Microsoft Azure <strong>Virtual Machine Scale Set</strong> or <strong>VMSS</strong>, it is imperative to monitor how your VMSS has been running over time. This helps you tweak its autoscaling settings. You should monitor this continuously for any critical applications in your environment. Ideally, to begin with, this should be done in a test or QA environment with a simulated load running against your application for an extended period. Then you should keep doing this in upper environments as a best practice. </p>
<p>To access this setting, navigate to your VM scale set and open the &quot;Scaling&quot; option under the Settings menu. Here, click on the &quot;<strong>Run history</strong>&quot; tab. All the history of how the instances were scaled out and in is visible here. You can filter the data by time to narrow the window. At the bottom, the Operations list shows what operations occurred on the VMSS and when. </p>
<img src="/images/15845936465e72faeeb9450.png" alt="Scaling Run History">
<p>This option should be used in conjunction with the performance data for the app or workloads running on your VMSS for a complete picture. If you see any performance drops, you should tweak the settings to scale out earlier than currently configured. E.g. if scale-out is configured for when the average CPU percentage goes above 75% and you still see performance hits, lower that threshold from 75% to a value like 65%. </p>
<p>There may also be times during the day or week when there is more load on your application. To handle this, you can add a schedule for scaling based on time. Note that there is no single best combination; the best settings for you depend upon your application. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-04-viewing-the-run-history-of-vmss</link>
<pubDate>Sat, 20 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - Azure - 03 Updating the Scaling policies of VMSS</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In Microsoft Azure <strong>Virtual Machine Scale Sets</strong> or <strong>VMSS</strong>, understanding and setting up appropriate scaling settings is paramount. You have the option to either set the scaling settings manually or set them to auto-scale based on different criteria. </p>
<p>To access these settings, navigate to your VMSS and go to the <strong>Scaling</strong> option under Settings. Ensure that &quot;Custom autoscale&quot; is selected. Then click on the &quot;<strong>Configure</strong>&quot; button from the top.</p>
<img src="/images/15845924135e72f61d7810a.png" alt="Scaling configurations">
<p>On the next screen, you can choose to &quot;scale based on a metric&quot;, such as CPU percentage or memory consumption on the VMs. Here you define: </p>
<ul>
<li><strong>Scale out</strong> - This is the setting for how the instance count should <em>increase</em>. E.g. if the Average CPU percentage increases beyond 75% then increase the count by 1.</li>
<li><strong>Scale in</strong> - This is the setting for how the instance count should <em>decrease</em>. E.g. if the Average CPU percentage decreases below 25% then decrease the count by 1.</li>
</ul>
<p>You also have the option to scale to a specific instance count based on a time schedule.</p>
<p>Another setting you have control over here is the <strong>Instance Limits</strong>. You define a minimum and a maximum number of instances, along with a default (initial) instance count. Note that the default value must lie between the minimum and the maximum. </p>
<img src="/images/15845924185e72f622efeb6.png" alt="Scaling rules">
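<p>The scale-out rule, the scale-in rule, and the instance limits combine into one simple decision each time the metric is evaluated. A minimal Python sketch of that logic, using the example thresholds and assumed limits from above (these are not platform defaults):</p>

```python
def next_instance_count(current, avg_cpu, scale_out_cpu=75, scale_in_cpu=25,
                        minimum=1, maximum=10):
    # +1 above the scale-out threshold, -1 below the scale-in
    # threshold, then clamp to the configured instance limits.
    if avg_cpu > scale_out_cpu:
        current += 1
    elif avg_cpu < scale_in_cpu:
        current -= 1
    return max(minimum, min(maximum, current))

print(next_instance_count(3, 80))   # 4  (scale out)
print(next_instance_count(3, 10))   # 2  (scale in)
print(next_instance_count(10, 90))  # 10 (capped at the maximum)
```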
<p>Another key point to note is that you can add multiple scaling conditions to ensure autoscaling occurs as you require. Now that you have autoscaling set up, you should next simulate load on the application running on your VMSS and validate that you are not facing any performance issues. If you do, tweak the autoscaling numbers.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-03-updating-the-scaling-policies-of-vmss</link>
<pubDate>Thu, 18 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - Azure - 02 View Running Instances in VMSS</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>When you have a Microsoft Azure <strong>Virtual Machine Scale Set</strong> or <strong>VMSS</strong>, you will want to view how many instances are running in it and their status. You can do so very easily by navigating to your VMSS. </p>
<p>From All Services, go to Virtual machine scale sets. Click on your VMSS to open it in its own blade. Then, from the left-hand menu, navigate to the Settings category and click on <strong>Instances</strong>. This will show you all the instances in your VMSS along with their status. </p>
<p>E.g. in the screen below, you can view 3 instances, 1 of which is in Running state and 2 are in Creating state. </p>
<img src="/images/1111111.png" alt="Running Instances in VMSS">]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-02-view-running-instances-in-vmss</link>
<pubDate>Mon, 15 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - Azure - 01 Creating VMSS - Part 2</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>This is part 2 of the post for creating Microsoft Azure <strong>Virtual Machine Scale Sets</strong> or <strong>VMSS</strong>. Let's dive right in.</p>
<p>Continuing with the VMSS creation wizard, next up you have the <strong>Scaling</strong> options. You provide the initial instance count. If you want autoscaling, I recommend setting up a custom policy rather than manual scaling. You can automate based on various conditions, such as CPU exceeding certain thresholds. You can also tweak these settings once the VMSS has been set up and can add multiple criteria.</p>
<img src="/images/15845877715e72e3fb7a2ce.png" alt="Scaling Options">
<p>Next, you have the Management settings. Here you can provide upgrade policies and set up diagnostics monitoring, identity, automatic OS upgrades, instance termination notifications, etc. </p>
<img src="/images/15845877795e72e40354599.png" alt="Management settings">
<p>To monitor health, you can set up rules here. E.g. let's say you have a web app running on the VMs in the VMSS. You can monitor its health by probing HTTP port 80 at a particular path of the app running on your workloads.</p>
<img src="/images/15845896975e72eb811e4d8.png" alt="Monitoring Health">
<p>Various advanced settings can be configured on the next screen. If you are running Linux VMs in your VMSS, you can leverage &quot;cloud-init&quot; to automate configuration on start-up of every new instance in the VMSS. Proximity placement groups ensure that the different instances in your VMSS are deployed physically close to each other in a datacenter. </p>
<p>You can also choose the VM generation here. </p>
<img src="/images/15845897035e72eb874aa00.png" alt="Advanced settings">
<p>You can provide Tags next to better categorize the resources. </p>
<img src="/images/15845897085e72eb8ccda8e.png" alt="Tags">
<p>Finally, you review all the settings and hit Create to create the VMSS.</p>
<img src="/images/15845897145e72eb9290347.png" alt="Review and Create">]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-01-creating-vmss-part-2</link>
<pubDate>Fri, 12 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - Azure - 01 Creating VMSS - Part 1</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Microsoft Azure <strong>Virtual Machine Scale Sets</strong> or <strong>VMSS</strong> let you deploy your workload onto identical virtual machines. These VMs are load-balanced internally, and the load is managed automatically for you. You can define various autoscaling rules to grow and shrink your resources based on demand. E.g. if the average CPU percentage rises above a threshold, you can automatically increase the number of running instances, i.e. scale out. Similarly, you can configure the instance count to be reduced, i.e. scale in, if the average CPU percentage drops below a certain number. This autoscaling is provided out of the box with VMSS. </p>
<p>Let's start working with VMSS by creating one.</p>
<p>You will find VMSS under the <strong>Compute</strong> category.</p>
<img src="/images/15845631905e7283f6d0d93.png" alt="VMSS under All Services">
<p>When in the blade for VMSS, click on the &quot;+Add&quot; button</p>
<img src="/images/15845631995e7283ffe9bdf.png" alt="Adding a new VMSS">
<p>Provide the subscription and Resource Group just like for any other resource creation in Azure. Provide a name for the VMSS, a region where it will be deployed and Availability set if you want any.</p>
<img src="/images/15845659695e728ed10ab92.png" alt="Creating VMSS - Basics">
<p>Next, you provide the Instance details. This is very important: this is the underlying image and its compute size, and it determines the consumption you will incur in your VMSS based on the number of instances running.</p>
<p>You also provide authentication information for connectivity.</p>
<img src="/images/15845659755e728ed77de6f.png" alt="Creating VMSS - More Basics">
<p>Next, we need to provide the disk details (just like for a single VM). Here you can decide the type of disk to use for OS disk. You can also create and attach data disks here.</p>
<img src="/images/15845659805e728edcda944.png" alt="Creating VMSS - Disk related options">
<p>Next up we have networking details. You provide information regarding which virtual network the underlying VMs will be connected to in this VMSS. You also provide the Network Interface that VMSS will be using for connectivity.</p>
<img src="/images/15845659885e728ee482698.png" alt="Creating VMSS - Networking details">
<p>In part 2 we will continue with the wizard and setting up of VMSS. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-01-creating-vmss-part-1</link>
<pubDate>Thu, 11 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Auto Scaling - Azure VMSS vs AWS Auto Scaling</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Autoscaling</strong> is a key aspect of Infrastructure as a Service (IaaS) in the cloud. The demand for your service, and therefore the load on your applications, will vary based on factors such as:</p>
<ul>
<li>The time of the day</li>
<li>The day of the month</li>
<li>Any holidays</li>
<li>Any special events etc. </li>
</ul>
<p>Your application needs to scale out or scale in based on this dynamic demand. You cannot find a perfect mix and pre-determine the exact number of instances (of virtual machines or EC2) that you need to run. If you pick too few, performance will suffer; if you overestimate, you will be looking at costly bills. You always want to balance <strong>Cost</strong> and <strong>Performance</strong>. </p>
<p>This is where Autoscaling comes into the picture. With this, you can have rules in place that will automatically scale out or scale in your number of instances running the application. </p>
<p>As a <strong>pre-requisite</strong>, you need an application that can be deployed on multiple instances behind a load balancer. </p>
<p>Both Azure and AWS provide out-of-the-box services for this functionality. In both platforms, you can define rules for how the scaling should occur in an automated fashion. E.g. you can have the number of instances increase by 1 if the average CPU percentage rises above 75%, and another rule to decrease the number of instances by 1 if the average CPU percentage drops below 25%. </p>
<p><strong>Microsoft Azure</strong> has packaged everything into one service called <strong>Virtual Machine Scale Sets</strong>, more popularly known as <strong>VMSS</strong>. The automation for autoscaling is built directly into this service and is as easy as setting up a few configurations. </p>
<p><strong>Amazon's AWS</strong> also has this capability, although this is scattered over different services. These include:</p>
<ul>
<li>Launch Configurations - a pre-packaged set of configurations that define how to set up new EC2 instances</li>
<li>Launch Templates - newer way to do the same thing as Launch Configurations. These have more features like versioning, etc.</li>
<li>Auto Scaling Groups - a load-balanced group of EC2 instances</li>
<li>AWS Auto Scaling - a newer service providing the automation for scaling out and scaling in of EC2 instances</li>
</ul>
<p>We will be working on these services in the next few posts and taking a closer look at each of these.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-auto-scaling-azure-vmss-vs-aws-auto-scaling</link>
<pubDate>Wed, 10 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Batch Services - AWS - Working with AWS Batch Service</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<h3>AWS Batch Service</h3>
<p>AWS Batch Service has below main concepts:</p>
<ul>
<li><strong>Job</strong> - This is what is sent for execution to the Job Queue. This is the main work that you are trying to accomplish e.g. processing an image.</li>
<li><strong>Job Queue</strong> - When you submit a job, it is sent to a job queue, which has linked compute resources. The <strong>AWS Batch Scheduler</strong> places the job on a compute resource within a compute environment.</li>
<li><strong>Compute Environment</strong> - This is the logical grouping of compute resources i.e. EC2 instances that can receive jobs from a job queue</li>
</ul>
<p>AWS Batch automatically scales out your compute resources when there are jobs to work on, and scales them back in when the job queue is empty.</p>
<h3>Working with the Service</h3>
<p>The AWS Batch service can be found under All Services, in the Compute category. Click on <strong>Batch</strong> to launch the AWS Batch service.</p>
<img src="/images/15844277715e7072fb5c4a9.png" alt="AWS Batch Service">
<p>Here you are presented with the default &quot;<strong>Getting Started</strong>&quot; screen.  It gives you an overview of how the service works. </p>
<p>Clicking on the Getting Started button takes you to the wizard to set up the service.</p>
<img src="/images/15844277935e707311769a0.png" alt="Getting Started">
<p>First, you define the Job and its details. You start by choosing where you want to run your job and defining a runtime for it.</p>
<img src="/images/15844278105e707322dfe0c.png" alt="Defining the Job and it's Runtime">
<p>Next, you provide the container properties for the job. The command you define here is what is executed when the job runs. This is also where you define the compute power required to run the job, e.g. 2 vCPUs and 2000 MiB of memory. You also define the number of attempts to be made to run the job successfully. The job execution timeout, in seconds, is another thing you define here: if the job does not finish within this time, it is stopped and moved to the Failed queue.</p>
<img src="/images/15844278255e70733101239.png" alt="Container Properties">
<p>You can optionally define parameters and environment variables as well. </p>
<img src="/images/15844278375e70733d56135.png" alt="Job Parameters and Environment Variables">
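<p>The wizard fields above map onto a job definition payload. As a hedged sketch, this is roughly the shape that boto3's <code>batch.register_job_definition</code> call accepts; the job name, container image, and all values below are placeholders for illustration:</p>

```python
def build_job_definition(name, image, vcpus, memory_mib,
                         command, attempts, timeout_seconds):
    return {
        "jobDefinitionName": name,
        "type": "container",
        "containerProperties": {
            "image": image,            # container image to run
            "vcpus": vcpus,            # e.g. 2 vCPUs
            "memory": memory_mib,      # e.g. 2000 MiB
            "command": command,        # executed when the job runs
        },
        # retried this many times before landing in the Failed queue
        "retryStrategy": {"attempts": attempts},
        # per-attempt execution timeout, in seconds
        "timeout": {"attemptDurationSeconds": timeout_seconds},
    }

job_def = build_job_definition(
    "image-processing", "busybox", 2, 2000, ["echo", "hello"], 2, 180)
```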
<p>Under <strong>Step 2</strong>, you define the compute environment and set up the Job queue. The compute environment is where the job runs, and you define its behavior in this section. You can choose between:</p>
<ul>
<li><strong>On-Demand</strong> - for critical, business-impacting jobs. This is the most predictable option.</li>
<li><strong>Spot</strong> - a very cost-effective option, with savings of up to 90% compared to On-Demand pricing. It uses EC2 Spot Instances to achieve this.</li>
</ul>
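<p>The cost trade-off between the two options is simple arithmetic. The sketch below treats the &quot;up to 90%&quot; figure as a best-case discount; real Spot prices fluctuate with market demand, and the hourly rate used here is a placeholder, not an actual AWS price:</p>

```python
def best_case_spot_cost(on_demand_hourly, hours, max_savings=0.90):
    # Best case: Spot capacity is available the whole time
    # at the maximum discount over the On-Demand rate.
    return on_demand_hourly * hours * (1 - max_savings)

on_demand_total = 0.10 * 100              # $10 for 100 hours On-Demand
spot_total = best_case_spot_cost(0.10, 100)  # as little as ~$1 on Spot
```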
<p>You can also define a minimum, desired and a maximum number of vCPUs here.</p>
<img src="/images/15844278495e70734941b33.png" alt="Compute Environment">
<p>You set up your networking next. Also, this is where you define the Job Queue and link it to the Compute environment. </p>
<img src="/images/15844278605e707354139bf.png" alt="Networking for Compute Environment and Job Queue">
<p>Hit <strong>Create</strong> and you should be all set. AWS creates the underlying resources and shows you the status. </p>
<h3>Viewing the Resources and Logs</h3>
<p>You can now navigate to the Dashboard and view the newly created resources.</p>
<img src="/images/15844692465e7114fe97480.png" alt="AWS Batch Dashboard"> 
<p>You can navigate to the Jobs section and find the job that was executed; its name will look like a GUID. Click on this job, and in the popup, click on &quot;<strong>View logs</strong>&quot;. This will take you to the CloudWatch window and show you the logs.</p>
<img src="/images/15844695395e71162355651.png" alt="Log for the Jobs"> 
<p>That's all there is to it. You can define anything from simple jobs, like executing an application, to very complex jobs like image and video processing.</p>
<p>For more information check this link: </p>
<ul>
<li><a href="https://aws.amazon.com/batch/" target="_blank">AWS Batch service</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-aws-working-with-aws-batch-service</link>
<pubDate>Mon, 08 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Batch Services - Azure - 4 Creating Tasks in a Batch Job</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>So far we have seen that to consume the Azure Batch service we need a <strong>Batch account</strong>. Within this Batch account, we create a <strong>Pool</strong>. Next, we create a <strong>Job</strong> that runs on this Pool. Finally, every Job has multiple <strong>Tasks</strong> that are actually executed within the Job (and on the Pool).</p>
<p>In this post, we will create one such Task in a Job. </p>
<p>To create a Task, begin by navigating to your Batch account (if not there already). Next, click on the Jobs option under the Features category. Find your job and click on it.</p>
<img src="/images/15843974675e6ffc9b9660f.png" alt="Navigating to Jobs">
<p>Within the job, find the Tasks menu and click on it. You will see the &quot;+Add&quot; option here to add a new Task.</p>
<img src="/images/15843974735e6ffca175d8f.png" alt="Tasks section">
<p>Next, a wizard will open up to add a new Task. Here you primarily provide a task ID, a display name, and a command line. Note that this command line is where you specify what to run. </p>
<p>A simple command line may look like the one below. It displays the Batch environment variables and then waits for 180 seconds. This is the command used in the screenshots and the output later in this post.</p>
<p><code>cmd /c "set AZ_BATCH &amp; timeout /t 180 &gt; NUL"</code></p>
<p>A more complex command line, to execute an app package may look like:</p>
<p><code>cmd /c %AZ_BATCH_APP_PACKAGE_app01#1.0%\myapp.exe</code></p>
<img src="/images/15843974795e6ffca706a55.png" alt="Add Task wizard">
<p>You can configure more settings such as environment settings, task dependencies, and application packages. Hit Submit when ready to create the Task.</p>
<img src="/images/15843975025e6ffcbe8536b.png" alt="More settings">
<p>To check the output of the task execution, we will look at its output stream. Navigate to your task, and under the &quot;<strong>Task Files</strong>&quot; section click on the &quot;<strong>Files on node</strong>&quot; option. Here you will find the &quot;<strong>stdout.txt</strong>&quot; file. Click on this file to view the output.</p>
<img src="/images/15843975075e6ffcc3b3739.png" alt="Checking the output">
<p>The output stream (for the simple command line from above) will look like below. You can download this for offline usage if you require it.</p>
<img src="/images/15843975135e6ffcc9bc468.png" alt="Checking the Task output stream">
<p>This concludes a brief overview of how to consume Azure Batch service.</p>
<p>For more information check this link: <a href="https://azure.microsoft.com/en-us/services/batch/#overview" target="_blank">Azure Batch service</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-4-creating-tasks-in-a-batch-job</link>
<pubDate>Fri, 05 Jul 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure App Service now has a free Linux tier and supports Python and Java</title>
<description><![CDATA[<p>Azure App Service has expanded its capabilities to include Linux as it's hosting Operating System. With the goodness of Linux also comes the support for Python and Java natively. </p>
<p>Now when you create a new App Service in Microsoft Azure, you can view the different Python versions that are currently supported.</p>
<img src="/images/15851055505e7aca8e0e0a4.png" alt="Python support in App Service">
<p>You can also view various supported Java versions.</p>
<img src="/images/15851059115e7acbf74afb6.png" alt="Java Support">
<p>Under the Operating System settings, you can see Linux as an option. Note that based on the runtime stack selected, this option may be pre-selected for you, in which case you won't be able to change it. If you select a runtime stack that is supported on both operating systems, you will have the option to switch between the two platforms. E.g. for .NET Core you can select either of the two platforms.</p>
<img src="/images/15851059215e7acc01c3f2f.png" alt="Linux Tier">
<p>For more information please check this link: <a href="https://azure.microsoft.com/en-us/blog/azure-app-service-update-free-linux-tier-python-and-java-support-and-more/" target="_blank">Azure App Service update: free Linux tier, Python and Java support, and more</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-app-service-now-has-a-free-linux-tier-and-supports-python-and-java</link>
<pubDate>Sun, 30 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Batch Services - Azure - 3 Creating Job in the Batch Pool</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Once you have a Batch account set up with a Pool (of compute nodes) running, you will need a job running in this pool to consume all that computing power. </p>
<p>To create a Job in the Batch Pool, you first navigate to your Batch account (if not there already). Then scroll down and select <strong>Jobs</strong>. Click on &quot;<strong>+Add</strong>&quot; to add new jobs.</p>
<img src="/images/15843967305e6ff9ba4b7bb.png" alt="Jobs section">
<p>In the Add Job blade, provide a unique Job ID. Add this job to the appropriate pool by clicking on &quot;<em>Select a pool to run the job on</em>&quot;. This will bring up another blade listing all your pools. Select the one you want and hit the <strong>Select</strong> button.</p>
<img src="/images/15843967365e6ff9c0b0724.png" alt="Adding a New Job">
<p>Now that the job is also ready, you can start adding Tasks to this job. You can add multiple tasks to one job. We will check that next. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-3-creating-job-in-the-batch-pool</link>
<pubDate>Mon, 24 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Batch Services - Azure - 2 Creating Batch Pool in an Account</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Once we have a Batch account, we will need a pool of compute nodes. You can do that by creating a Pool in the Batch account. </p>
<p>First, we will need to navigate to the Batch account. To do this first go to <strong>All Services</strong> in the Azure portal menu. </p>
<img src="/images/15843956425e6ff57adedc0.png" alt="All Services">
<p>Next, find <strong>Batch accounts</strong> under the Compute category.</p>
<img src="/images/15843956515e6ff58397575.png" alt="Batch accounts">
<p>Under Batch accounts, all your accounts will be listed. Click on the account in which you want to create the pool. Under the account settings, scroll down to the <strong>Features</strong> category and click on <strong>Pools</strong>. You will not have any pools yet; click on &quot;+Add&quot; to create a new one.</p>
<img src="/images/15843956625e6ff58e9db8f.png" alt="Option to Add Pool">
<p>The &quot;<strong>Add Pool</strong>&quot; blade will open up. Provide a pool ID and an optional display name. Provide the operating system details that you want to target. You can use custom images or one of the marketplace images. E.g. in the screenshot below, the marketplace image for Windows Server 2016 Datacenter with a small disk is selected.</p>
<img src="/images/15843956685e6ff594b25aa.png" alt="Pool details and OS settings">
<p>You can set the node size for each compute node in the pool. You will be charged based on the number of nodes and the size of each node. You can also provide Scale settings: the pool can use a fixed number of nodes, as shown below, or it can be set to auto-scale. </p>
<img src="/images/15843957245e6ff5cc6fc98.png" alt="Node size and scaling options">
<p>Next, you can configure a start task that runs on each compute node as it joins the pool. You also have various other settings to customize the pool setup. </p>
<img src="/images/15843957335e6ff5d50a9f0.png" alt="Task and other optional settings">
<p>If you scroll down further, you will see the options to configure virtual network connectivity for the pool. For example, if you want to run an app that has a network dependency, you can do so by configuring your pool to connect to your virtual network. </p>
<p>If you already own Windows Server licenses, you can configure Hybrid Use Benefit (HUB) licensing. Note that this option only shows up for Windows Server compute nodes.</p>
<img src="/images/15843957385e6ff5dacde39.png" alt="Virtual Network and HUB license settings">
<p>Click OK to create the pool. </p>
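<p>The portal choices above can also be expressed programmatically. Below is a minimal sketch of a pool definition, shaped like the JSON body the Batch management (ARM) API accepts; the pool ID, VM size, and node count are illustrative, not values from this post:</p>

```python
# Sketch of the pool settings chosen in the portal, as a plain dictionary.
# Field names follow the Microsoft.Batch/batchAccounts/pools ARM schema;
# the ID, VM size, and target node count below are hypothetical examples.
pool_spec = {
    "id": "demo-pool",
    "vmSize": "Standard_A1",
    "deploymentConfiguration": {
        "virtualMachineConfiguration": {
            # Marketplace image, matching the screenshot above
            "imageReference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2016-Datacenter-smalldisk",
                "version": "latest",
            },
            "nodeAgentSkuId": "batch.node.windows amd64",
        }
    },
    # Fixed scale: you are billed per node, per node size
    "scaleSettings": {"fixedScale": {"targetDedicatedNodes": 2}},
}
```

<p>In practice you would submit such a definition through an ARM template, the Azure CLI, or the Batch SDK rather than hand-crafting the request.</p>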
<p>In the next post, we will create a Job in this pool. </p>
<p>For more information check this link: <a href="https://azure.microsoft.com/en-us/services/batch/#overview" target="_blank">Azure Batch service</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-2-creating-batch-pool-in-an-account</link>
<pubDate>Fri, 21 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Batch Services - Azure - 1 Creating Batch account</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>To consume the Azure Batch service, you need a <strong>Batch account</strong>. </p>
<p>To create a Batch account, navigate to Create a new service (via the three lines in the top-left corner), then in the menu, under the Compute category, find the &quot;<strong>Batch Services</strong>&quot; option.</p>
<img src="/images/15843943555e6ff0737c790.png" alt="Creating Batch Service">
<p>Just like any other resource, you provide a Subscription and a Resource Group. You also provide a name for the account. This name should contain only lowercase characters and must be unique within Azure's DNS space, as it is used to uniquely identify your Batch account. Finally, you select the location for your Batch account.</p>
<img src="/images/15843943635e6ff07b6da9f.png" alt="New Batch account wizard">
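<p>If you script the account creation, a quick pre-flight check of the name can save a failed deployment. A small sketch, assuming the naming rule described above plus Azure's documented 3-24 character length range for Batch account names:</p>

```python
import re

# Batch account names must be lowercase (letters and digits) and between
# 3 and 24 characters long; uniqueness can only be verified against Azure.
BATCH_ACCOUNT_NAME = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_batch_account_name(name: str) -> bool:
    """Local syntax check before attempting the deployment."""
    return bool(BATCH_ACCOUNT_NAME.match(name))
```
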
<p>Next, you have advanced settings. Here you select the Pool allocation mode; the two options are Batch service and User subscription. This setting specifies whether compute node pools are provisioned in a subscription managed by the Batch service, or in the subscription in which you are creating the new Batch account. In most cases, this should be left at the default, i.e. managed by the Batch service.</p>
<img src="/images/15843943555e6ff0737c790.png" alt="Advanced options">
<p>Next, you can apply tags to categorize the resource and get better control over automation, reporting, and billing, etc. </p>
<img src="/images/15843944135e6ff0ad21a6f.png" alt="Applying Tags">
<p>Finally, you review the settings and create the Batch account. Note that you have the option to download the template here, so that you can automate the deployment the next time you need it, or deploy it uniformly across different environments. </p>
<img src="/images/15843944265e6ff0ba4b4a2.png" alt="Review and Create">
<p>Azure immediately shows you the status of your deployment. This screen refreshes automatically and will update to show when the deployment is complete.</p>
<img src="/images/15843944415e6ff0c9c13df.png" alt="Deployment Status">
<p>In the next post, we will look at creating a <strong>Pool</strong> within the Batch account.</p>
<p>For more information check this link: <a href="https://azure.microsoft.com/en-us/services/batch/#overview" target="_blank">Azure Batch service</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-1-creating-batch-account</link>
<pubDate>Thu, 20 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Batch Services - Azure vs AWS</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Both Microsoft Azure and Amazon Web Services provide a Batch service. These services are very similar in nature because they solve the same problem: running large-scale parallel and high-performance computing applications efficiently in the cloud.</p>
<p>Both services leverage underlying compute infrastructure to execute jobs, but they offer different cost-saving mechanisms. Microsoft leverages Hybrid Use Benefit (HUB) licensing to reduce the cost of running Windows Server, while AWS has integrated Spot Instances directly into the AWS Batch service to reduce the cost of the underlying compute.</p>
<p>We will explore each platform's service in detail in the upcoming posts.</p>
<p>For more information check these links: </p>
<ul>
<li><a href="https://azure.microsoft.com/en-us/services/batch/#overview" target="_blank">Azure Batch service</a></li>
<li><a href="https://aws.amazon.com/batch/" target="_blank">AWS Batch service</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-batch-services-azure-vs-aws</link>
<pubDate>Wed, 19 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 11 - Protecting Azure VMs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>For your mission-critical workloads, you can configure active replication of the VM to a secondary geographic location, right from the settings of the VM. Under the hood, it uses <strong>Azure Site Recovery</strong> to set up this disaster recovery.</p>
<p>Navigate to your VM and, under the Operations category, click on &quot;<strong>Disaster recovery</strong>&quot;. Here you can select the Target region for the replication. This region becomes your secondary location and should be factored into your Disaster Recovery strategy. For example, there should be appropriate infrastructure in place to allow connectivity to this secondary location. </p>
<img src="/images/15843419925e6f23e8d325d.png" alt="Disaster Recovery for a VM">
<p>Next, you have Advanced Settings for the Disaster Recovery. You can select the settings for the <strong>Target</strong>, i.e. which Resource Group, Virtual Network, etc. will be used in the target region if you fail over the VM to that region. You also have <strong>Storage</strong> settings, where you define the cache storage account (used to replicate the managed disks) and the source-to-target managed disk mappings. </p>
<p>Replication and Extension settings can also be updated here if required. Under Replication settings, I recommend that you ensure the selected vault is the appropriate one if you have more than one vault in your environment. </p>
<img src="/images/15843419995e6f23ef8d7d2.png" alt="DR advanced settings">
<p>Finally, review all the settings and start the replication by clicking the button shown below. This triggers an initial replication. </p>
<img src="/images/15843420065e6f23f61141f.png" alt="Review and start replication">
<p>You always have an active copy of the VM in the secondary location (with a delay of up to 15 minutes, depending on your replication policy). You can trigger a Failover if anything goes wrong in the primary region. Once the DR event is over, you can also Failback to the original location.</p>
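<p>The replication delay mentioned above translates into a simple freshness check: compare the timestamp of the latest recovery point against the lag your policy tolerates. A minimal sketch, using the 15-minute figure from this post as the threshold:</p>

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: the replica may lag the primary by up to ~15
# minutes, depending on the replication policy configured in Site Recovery.
RPO_THRESHOLD = timedelta(minutes=15)

def recovery_point_is_stale(latest_recovery_point: datetime, now: datetime) -> bool:
    """Flag a recovery point that falls outside the tolerated lag window."""
    return (now - latest_recovery_point) > RPO_THRESHOLD
```
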
<p>For more information check this link: <a href="https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-quickstart" target="_blank">Set up disaster recovery to a secondary Azure region for an Azure VM</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-11-protecting-azure-vms</link>
<pubDate>Sun, 16 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 10 - Backing up Azure VMs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Backing up a VM is a best practice from an operations perspective. The VM should be backed up at periodic intervals, and these backups should be retained for some time as well.</p>
<p>You can back up a VM right from its settings. After you navigate to Azure Virtual Machines, click on the VM for which you want to configure backup. Then, under the Operations category, click on the <strong>Backup</strong> option. To configure the backup, you select the backup vault, its Resource Group, and the backup policy. </p>
<p>You can also create a new backup policy on the fly. To do so, you need to click on the &quot;Create (or edit) a new policy&quot; link.</p>
<img src="/images/15843409935e6f2001ca1a5.png" alt="Azure Backup from VM settings">
<p>In the create or edit backup policy blade, you configure how often and when the backup should be taken. You also configure how long the backup should be retained. </p>
<img src="/images/15843411115e6f207736d06.png" alt="Creating backup policy">
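<p>The choices made in the policy blade boil down to a schedule plus retention settings. A sketch of the same configuration as plain data (the field names and values are illustrative, not the exact Recovery Services API schema):</p>

```python
# Hypothetical representation of the backup policy configured above:
# when the backup runs, and how long each recovery point is retained.
backup_policy = {
    "name": "DailyPolicy-demo",      # illustrative policy name
    "schedule": {
        "frequency": "Daily",
        "time": "02:00",             # hour of day the backup is taken (UTC)
    },
    "retention": {
        "daily_backups_days": 30,    # keep each daily backup for 30 days
    },
}
```
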
<p>For more information check this link: <a href="https://docs.microsoft.com/en-in/azure/backup/backup-azure-vms-first-look-arm" target="_blank">Back up an Azure VM from the VM settings</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-10-backing-up-azure-vms</link>
<pubDate>Wed, 12 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 09 - Backing up EC2 instances Part 3</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In this post, we will see how to create an <strong>on-demand backup</strong>. This type of backup is typically needed when you are about to perform a critical operation that could impact the functionality of the instance. Prior to performing such an operation, you want to ensure that you have the latest backup of the instance. </p>
<p>You access the option to create the on-demand backup by navigating to the AWS Backup service and then selecting the &quot;<strong>Protected resources</strong>&quot;. You will see the button for creating the on-demand backup here.</p>
<img src="/images/15843382255e6f15318fc95.png" alt="Creating an On-Demand Backup">
<p>In the wizard to create the on-demand backup, you provide the resource for which you want to create the backup. You can trigger the backup right away or delay the start by specifying a backup window. You can also set the expiration of the backup under Lifecycle. Finally, you can select an existing vault or create a new one. </p>
<img src="/images/15843382305e6f15364af96.png" alt="On-Demand Backup wizard">
<p>Finally, you select the IAM role that the Backup service will assume while creating the recovery point. Optionally, you can also create and apply Tags. </p>
<img src="/images/15843382345e6f153ab16b9.png" alt="IAM role and Tags">
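<p>The wizard above maps to the AWS Backup StartBackupJob API. A sketch of the request a boto3 call would send; the ARNs are placeholders, and an actual call requires AWS credentials:</p>

```python
# Hypothetical StartBackupJob request mirroring the wizard's fields.
# Resource and role ARNs below are placeholders, not real resources.
start_backup_job_kwargs = {
    "BackupVaultName": "Default",
    "ResourceArn": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234",
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    "StartWindowMinutes": 60,              # backup window: how long the job may wait to start
    "Lifecycle": {"DeleteAfterDays": 35},  # expiration set under Lifecycle
}
# With credentials configured, the call would be:
#   boto3.client("backup").start_backup_job(**start_backup_job_kwargs)
```
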
<p>For more information please check this link:
<a href="https://aws.amazon.com/backup/?nc2=h_ql_prod_st_bu" target="_blank">AWS Backup</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-09-backing-up-ec2-instances-part-3</link>
<pubDate>Sun, 09 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 09 - Backing up EC2 instances Part 2</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In the previous post, we configured the backup plans in <strong>AWS Backup</strong>. Now we will use those backup plans and assign some resources to them. Specifically, we will assign the EC2 instance to configure backup for that instance.</p>
<h3>Assigning Resources</h3>
<p>Once a Backup Plan has been created, or if one already exists in the system, navigating to the AWS Backup service gives you a view similar to the one below. From the left menu, click on Backup Plans (if not already selected), then click the &quot;<strong>Assign Resources</strong>&quot; button.</p>
<img src="/images/15843325645e6eff1405a8e.png" alt="Assigning Resources">
<p>In the <strong>Assign Resources</strong> wizard, provide a name for the resource assignment. Also, provide an IAM role that the Backup service will use to create the recovery points. Next, you select the resources you want to assign. Below is the combination of selections used to select the EC2 instance. </p>
<img src="/images/15843325825e6eff2698333.png" alt="Assigning by Resource ID">
<p>You have two options for assigning resources. The first is to assign multiple resources by <strong>Tags</strong>. The second is to provide a <strong>Resource ID</strong> (which is unique for every resource).</p>
<img src="/images/15843358635e6f0bf7c57d9.png" alt="Options for Assigning Resources">
<p>There are different types of resources that you can back up using the AWS Backup service. These include:</p>
<ol>
<li>EC2 instances</li>
<li>Elastic Block Store or EBS</li>
<li>Relational Database Service or RDS </li>
<li>Elastic File System or EFS</li>
<li>Storage Gateway</li>
<li>DynamoDB</li>
</ol>
<img src="/images/15843358705e6f0bfe20045.png" alt="Resource Types">
<p>Once you click on Assign resources, the EC2 instance will be assigned to the backup plan. It will not show up under the protected resources until the next scheduled run of the backup plan creates a recovery point for the resource. </p>
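<p>Assigning a resource to a plan corresponds to the AWS Backup CreateBackupSelection API. A sketch of its selection document for assigning a single EC2 instance by its unique Resource ID/ARN (all identifiers below are placeholders):</p>

```python
# Hypothetical CreateBackupSelection body mirroring the wizard above.
# Option 2 from the post: assign by the resource's unique ARN.
backup_selection = {
    "SelectionName": "ec2-demo-assignment",   # name given in the wizard
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    "Resources": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234"  # placeholder instance
    ],
}
# With credentials, the assignment would be made with:
#   boto3.client("backup").create_backup_selection(
#       BackupPlanId="<plan-id>", BackupSelection=backup_selection)
```
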
<p>For more information please check this link:
<a href="https://aws.amazon.com/backup/?nc2=h_ql_prod_st_bu" target="_blank">AWS Backup</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-09-backing-up-ec2-instances-part-2</link>
<pubDate>Wed, 05 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 09 - Backing up EC2 instances Part 1</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>To back up EC2 instances, you use the <strong>AWS Backup</strong> service. This is a fully managed, centralized backup service that simplifies the management of backups for your Amazon Elastic Block Store (EBS) volumes, your databases (Amazon Relational Database Service (RDS) or Amazon DynamoDB), AWS Storage Gateway, and your Amazon Elastic File System (EFS) file systems. </p>
<h3>Configuring Backup Plans</h3>
<p>To use AWS Backup, you need a Backup Plan created and configured. </p>
<p>Let's first navigate to the service by going to <strong>Services</strong> and then &quot;<strong>AWS Backup</strong>&quot; service under the Storage category.</p>
<img src="/images/15843315955e6efb4bd033b.png" alt="AWS Backup service navigation">
<p>When you launch this service for the very first time, you are provided with loads of information regarding the service. The first thing that you need to do is to create a <strong>Backup Plan</strong>. Click on the button to &quot;<strong><em>Create Backup plan</em></strong>&quot;.</p>
<img src="/images/15843316025e6efb524ea63.png" alt="Creating Backup plan">
<p>You can either start from an existing plan (to copy all its settings) or build a new plan. If you don't have anything configured yet, you will have to select the option to Build a new plan. Provide a name for the plan and scroll down.</p>
<img src="/images/15843316225e6efb66b1ab8.png" alt="Building a new plan">
<p>Next, provide the backup rule configuration. Every backup plan can have multiple backup rules. Basically, these define when a backup is taken, how often it is taken, when the backup expires, and so on. You start by specifying a name for the rule. Then you specify a Schedule for the backup; the options include Daily, Weekly, Monthly, etc. You can also customize the backup window, i.e. at what hour of the day the backup is taken.</p>
<p>Under <strong>Lifecycle</strong>, you define when the backup should transition to Cold storage and when it should expire. You also select a Vault for the backup. This is the vault resource where the recovery points created by the Backup service are organized. </p>
<img src="/images/15843316275e6efb6be8850.png" alt="Backup rule configuration">
<p>Next, you can copy the backup to multiple regions for redundancy. If the primary backup region goes down, you can use the copy from a different region to restore.</p>
<p>Here you also specify the Tags to add to a backup plan to organize and categorize this in a better way. </p>
<img src="/images/15843316355e6efb731b88c.png" alt="Copy to region and tags settings">
<p>Click on the &quot;<strong>Create plan</strong>&quot; button once done. As you can see from the two notes above, once the plan is created, you can assign resources to this backup plan. We want to add our EC2 instance to this plan. We will do this in the next blog post. </p>
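<p>The plan built in the console maps to the CreateBackupPlan API. A sketch of the plan document with one rule, using illustrative values: a daily backup window starting at 05:00 UTC, transition to cold storage after 30 days, and expiry after 120 days (AWS requires expiry to be at least 90 days after the cold-storage transition):</p>

```python
# Hypothetical CreateBackupPlan document mirroring the rule settings above.
backup_plan = {
    "BackupPlanName": "demo-daily-plan",
    "Rules": [
        {
            "RuleName": "daily-rule",
            "TargetBackupVaultName": "Default",         # vault that stores the recovery points
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily, backup window starts 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,
                "DeleteAfterDays": 120,                 # >= cold-storage transition + 90 days
            },
        }
    ],
}
# With credentials:
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```
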
<p>For more information please check this link:
<a href="https://aws.amazon.com/backup/?nc2=h_ql_prod_st_bu" target="_blank">AWS Backup</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-09-backing-up-ec2-instances-part-1</link>
<pubDate>Tue, 04 Jun 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 08 - Azure VM Insights</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p><strong>Azure Monitor for VMs</strong>, or <strong>Insights</strong>, is the enhanced version of VM analytics. It includes performance and dependency data for your virtual machines.</p>
<p>This is very similar to &quot;<strong>Enable Detailed Monitoring</strong>&quot; using <strong>CloudWatch</strong> in AWS.</p>
<h2>Setting Up</h2>
<p>To set this up navigate to your virtual machine and then click on the <strong>Insights</strong> section under the <strong>Monitoring</strong> category. Click on the Enable button to start. Note that the VM should be in the running state for you to be able to enable this setting.</p>
<img src="/images/15843246435e6ee023c978d.png" alt="Enabling Insights">
<p>If you get an error when clicking the Enable button, ensure that the VM is running. If it is already running, wait and retry after a few minutes; sometimes VM boot-up takes time to finish (depending on the compute capacity of the VM).</p>
<p>For VM Insights to work, it needs to connect to a &quot;<strong>Log Analytics Workspace</strong>&quot;. When you click the Enable button, you are asked to select a workspace; a Microsoft Monitoring Agent (MMA) extension is then installed on the VM and connected to the selected workspace.</p>
<img src="/images/15843246505e6ee02a4df1a.png" alt="Configuring Workspace for Insights">
<h2>Analytics Data</h2>
<p>Once the setting is enabled and the extension deployed, the Insights view will update to show actual data from the VM. If you don't see it or can't refresh the view, navigate to any other section and come back to force a refresh.</p>
<p>The first tab is for the <strong>Performance</strong> data. This shows you interactive and filterable data. Some of the data includes Logical Disk Performance, CPU Utilization, Available Memory, Logical Disk analytics, Network data etc. </p>
<img src="/images/15843250275e6ee1a30f550.png" alt="Performance Data">
<p>The next tab is for Map, or Service Map. This shows you dependency data: a graphical, interactive view of how the VM interacts with other VMs and with external networks, which ports it is communicating on, etc.</p>
<img src="/images/15843250335e6ee1a9cf2a0.png" alt="Service Map">
<p>For more information check this link:</p>
<p><a href="https://docs.microsoft.com/en-us/azure/azure-monitor/insights/vminsights-overview" target="_blank">Azure Monitor for VMs</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-08-azure-vm-insights</link>
<pubDate>Fri, 31 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure App Configuration Service</title>
<description><![CDATA[<p>Azure App Configuration Service is a new service that let's application developers save their application related configurations separately in this service. These configurations are saved securely in this service. This service also allows for your application to scale without any dependency on the configuration data. </p>
<p>As the configurations are stored separately from the application code, developers and designers now have the flexibility to modify the app's behavior without having to alter the code and perform multiple deployments of the application.</p>
<p>When you navigate to the new service creation wizard in Microsoft Azure, search for App Configurations. This should bring up this new service as shown below.</p>
<img src="/images/15851052455e7ac95d09346.png" alt="App Configurations">
<p>For more information please check this link: <a href="https://docs.microsoft.com/en-us/azure/azure-app-configuration/overview" target="_blank">Azure App Configuration</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-app-configuration-service</link>
<pubDate>Thu, 30 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 07 - Boot Diagnostics of Azure VMs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Azure VM <strong>Boot diagnostics</strong> lets you diagnose what is happening under the hood during the boot-up. If you face any issues with the boot-up of the VM then this is where you will go to find the diagnostics data. </p>
<h3>Where is this setting</h3>
<p>You can find this setting when you navigate to the Azure Virtual Machines section and click on your VM. Under the settings menu, scroll all the way down to the &quot;Support + troubleshooting&quot; category. You will find the Boot diagnostics option here.</p>
<img src="/images/15843185965e6ec88424baa.png" alt="Boot Diagnostics setting">
<h3>Setting up Boot Diagnostics</h3>
<p>If this is not set up for your VM, you can do so easily. All you need is a storage account, and you can change this at any time. Click on the Settings button at the top, click <strong>On</strong>, and select the storage account. You can also create a new one if you require it.</p>
<img src="/images/15843192145e6ecaee02936.png" alt="Configuring Boot Diagnostics">
<h3>Checking the Boot Diagnostics data</h3>
<p>You can view the screenshot to check where the VM is in the boot-up process under the hood. If you are not able to log into the VM, this is the main area to check to see if the VM is ready (or if any updates are being applied to the VM). </p>
<img src="/images/15843195455e6ecc39b0efa.png" alt="Screenshots">
<p>You can also download the serial log data for the VM if you need to do advanced troubleshooting. This is especially helpful for Linux VMs.</p>
<img src="/images/15843195525e6ecc401b32d.png" alt="Serial log">
<p>For more information click here:
<a href="https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/boot-diagnostics" target="_blank">How to use boot diagnostics to troubleshoot virtual machines in Azure</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-07-boot-diagnostics-of-azure-vms</link>
<pubDate>Wed, 29 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 06 - Monitoring of Azure VMs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Within Azure, when you click on a Virtual Machine, you have control over various aspects of the VM right from one screen. When you navigate to the Azure Virtual Machines section, you are presented with all the virtual machines in your environment. When you click on any one VM, a new screen pops up; Microsoft calls this a <strong>Blade</strong>. In this blade, the Overview section is the first one and is selected by default. It shows you lots of information about the VM, such as its subscription, resource group, public and private IP addresses, a button to connect to the VM, etc. </p>
<p>When you scroll down a little, you get the out-of-the-box metrics for the VM. These metrics include CPU data, Network data, Disk IOPS, etc. You also have a time filter at the top.</p>
<img src="/images/15843136965e6eb56023024.png" alt="Azure VM Overview">
<p>From the left-hand side menu of the selected Virtual Machine, scroll down to the Monitoring section. This has loads of information and options. The key option to explore is &quot;<strong>Metrics</strong>&quot;. Here you can build custom graphs by adding each metric you want to monitor: click on Add Metric and select one from the drop-down list. The graph will update to show all the selected metrics.</p>
<img src="/images/15843137025e6eb566f2239.png" alt="Azure VM Metrics">
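<p>Behind the portal chart, the metrics come from the Azure Monitor metrics REST API. A sketch of how the request URL for a VM's &quot;Percentage CPU&quot; metric is assembled; the subscription ID, resource group, and VM name are placeholders, and the api-version shown is the one current for this API around the time of writing:</p>

```python
from urllib.parse import urlencode

# Placeholder resource ID of the VM whose metrics we want to query.
resource_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/demo-vm"
)

# Query string: API version plus the metric name shown in the portal chart.
query = urlencode({"api-version": "2018-01-01", "metricnames": "Percentage CPU"})

# Full request URL; an actual GET needs an Azure AD bearer token.
metrics_url = (
    f"https://management.azure.com{resource_id}"
    f"/providers/Microsoft.Insights/metrics?{query}"
)
```
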
<p>We will explore more options in the subsequent posts as well.</p>
<p>For more information, refer to the below links:
<a href="https://azure.microsoft.com/en-us/services/virtual-machines/" target="_blank">Microsoft Azure Virtual Machines</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-06-monitoring-of-azure-vms</link>
<pubDate>Sat, 25 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>New Tiers in Azure Application Gateway - Standard v2 and WAF v2 SKUs</title>
<description><![CDATA[<p>New tiers have been made generally available since last week for Application Gateway. These tiers are:</p>
<ul>
<li>Standard v2</li>
<li>WAF v2</li>
</ul>
<p>These tiers are backed by Microsoft's SLA of 99.95% availability. These tiers have various optimizations in terms of Autoscaling, Zone redundancy, faster provisioning, improved performance, etc.</p>
<p>You can see these new tiers when creating a new Application Gateway. In the <strong>Tier</strong> drop-down, you will now see these two new SKUs listed as well.</p>
<img src="/images/15851034525e7ac25c699cb.png" alt="New Tiers in Application Gateway">
<p>For more information please check this link: <a href="https://azure.microsoft.com/en-us/services/application-gateway/" target="_blank">Application Gateway</a></p>]]></description>
<link>https://HarvestingClouds.com/post/new-tiers-in-azure-application-gateway-standard-v2-and-waf-v2-skus</link>
<pubDate>Fri, 17 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 05 - Monitoring of EC2 instances</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Monitoring is centralized in AWS via <strong>CloudWatch</strong>. A few out-of-the-box metrics are integrated into the EC2 instances screen, while various in-depth and detailed metrics are available in the CloudWatch product. </p>
<p>To view the monitoring data, navigate to the EC2 section and then go to the Instances section. Click on the EC2 instance for which you want to view the metrics; the pane below then shows data for the instance. Here, select the <strong>Monitoring</strong> tab. You have different settings here that can help you monitor your EC2 instance. The key settings include:</p>
<ul>
<li><strong>CloudWatch alarms</strong> - this is a way to set up the alarms or alerts</li>
<li><strong>Enable Detailed Monitoring</strong> - this enables advanced diagnostics data capturing on the instance, giving you detailed monitoring of various metrics at a finer granularity, which you can also control</li>
<li><strong>View all CloudWatch metrics</strong> - when you click on this option, it will take you to CloudWatch resource. There you can pick and choose various metrics for your EC2 instance and visualize the same.</li>
<li>The <strong>bottom pane</strong> is where by default the key metrics for the instance are displayed. These metrics include CPU Utilization, Disk Read and Writes, Network analytics etc.</li>
</ul>
<img src="/images/15843122925e6eafe48349d.png" alt="AWS EC2 Instance monitoring">
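<p>The CPU chart on the Monitoring tab corresponds to the CloudWatch GetMetricStatistics API. A sketch of the request boto3 would send for the last hour of CPU utilization (the instance ID is a placeholder, and an actual call needs AWS credentials):</p>

```python
from datetime import datetime, timedelta

# Fixed end time for illustration; in practice you would use datetime.utcnow().
end = datetime(2019, 5, 15, 12, 0)

# Hypothetical GetMetricStatistics request for one EC2 instance's CPU metric.
get_metric_statistics_kwargs = {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0abcd1234"}],  # placeholder
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 300,              # 5-minute datapoints (basic monitoring granularity)
    "Statistics": ["Average"],
}
# With credentials:
#   boto3.client("cloudwatch").get_metric_statistics(**get_metric_statistics_kwargs)
```
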
<p>For more information, refer to the below links:
<a href="https://azure.microsoft.com/en-us/services/virtual-machines/" target="_blank">Microsoft Azure Virtual Machines</a></p>
<p><a href="https://aws.amazon.com/ec2/" target="_blank">Amazon Elastic Compute Cloud (EC2)</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-05-monitoring-of-ec2-instances</link>
<pubDate>Wed, 15 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure now supports GitHub identity single sign-on</title>
<description><![CDATA[<p>Azure now supports <strong>GitHub identity single sign-on</strong>. Now when you will try to log into the Azure portal you can use GitHub as an identity provider. If you already have a GitHub account then you don't need to create a new account. You can use the same account to log into the Azure portal (provided someone has given you access on some subscription).</p>
<p>When you click on this link, it takes you to the GitHub website to authenticate you and then brings you back to the Azure portal once authenticated. </p>
<img src="/images/15851077475e7ad323e447d.png" alt="GitHub as a login option">
<p>For more information please check this link: <a href="https://azure.microsoft.com/en-ca/updates/azure-now-supports-github-identity-single-sign-on-2/" target="_blank">Azure now supports GitHub identity single sign-on</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-now-supports-github-identity-single-sign-on</link>
<pubDate>Tue, 14 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 04 - Differences in IOPS</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Deciding the <strong>IOPS</strong> (Input/Output Operations Per Second, pronounced i-ops) is one of the most critical decisions when building your virtual machines, yet it is also one of the most ignored aspects of the deployment. This performance measurement of your OS and data disks can alter your experience significantly. Let's see how it differs between the two platforms.</p>
<h2>In AWS</h2>
<ul>
<li>For <strong>Provisioned IOPS (SSD)</strong> volumes, you can provision up to 50 IOPS per GiB. Provisioned IOPS (SSD) volumes can deliver up to 64,000 IOPS and are best used with EBS-optimized instances.</li>
<li>For <strong>General Purpose (SSD)</strong> volumes, baseline performance is 3 IOPS per GiB, with a minimum of 100 IOPS and a maximum of 16,000 IOPS. General Purpose (SSD) volumes under 1,000 GiB can burst up to 3,000 IOPS.</li>
<li><strong>Magnetic volumes</strong>, previously called standard volumes, deliver 100 IOPS on average and can burst to hundreds of IOPS.</li>
</ul>
<p>If you remember, when we were building the EC2 instances you had the option to add or tweak the storage in Step 4. Here you can see that you can select between the different types of storage volumes that we discussed above.</p>
<img src="/images/15843064775e6e992deb746.png" alt="AWS1 - Adding Storage">
<p>When you select the &quot;<em>Provisioned IOPS SSD (io1)</em>&quot; option, the IOPS column turns into a text box and you can tweak the IOPS to your heart's content. Note that you cannot go beyond 50 IOPS per GiB. For example, if the disk you selected is 8 GiB in size, you can't go beyond a maximum of 400 IOPS (i.e. 8x50). </p>
<img src="/images/15843064855e6e9935a9a3a.png" alt="AWS2 - Provisioned IOPS">
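<p>The limits above are easy to sanity-check in code. This small Python sketch encodes the io1 and gp2 rules exactly as stated (50 IOPS per GiB with a 64,000 ceiling; 3 IOPS per GiB with a 100 floor and 16,000 ceiling):</p>

```python
def max_provisioned_iops(size_gib):
    """Provisioned IOPS (io1): up to 50 IOPS per GiB, 64,000 IOPS ceiling."""
    return min(size_gib * 50, 64000)

def gp2_baseline_iops(size_gib):
    """General Purpose (gp2): 3 IOPS per GiB, floor of 100, ceiling of 16,000."""
    return max(100, min(3 * size_gib, 16000))

# The 8 GiB example from the text: the wizard caps you at 400 IOPS.
cap_for_8_gib = max_provisioned_iops(8)   # 8 x 50 = 400
```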
<h2>In Azure</h2>
<ul>
<li><strong>Premium SSD disks</strong> offer high-performance, low-latency disk support for I/O-intensive applications and production workloads. </li>
<li><strong>Standard SSD disks</strong> are a cost-effective storage option optimized for workloads that need consistent performance at lower IOPS levels. </li>
<li>Use <strong>Standard HDD</strong> disks for Dev/Test scenarios and less critical workloads at the lowest cost.</li>
</ul>
<p>You can update the OS disk type directly on the second screen of the Virtual Machine creation wizard, i.e. under the Disks category. You can select between the three types. The OS disk size is determined by the VM size selected and can be increased later. You can click the &quot;Create and attach a new disk&quot; link to add a data disk to the VM.</p>
<img src="/images/15843065145e6e99523d907.png" alt="Az1 Storage Types">
<p>In the &quot;Create a new disk&quot; wizard, you can change the size of the disk. Here you are shown, based on the size, how many IOPS you will get. Microsoft defines different tiers and provides IOPS based on the tier selected, i.e. based on the disk size selected. </p>
<p>Below you see the IOPS for <strong>Premium SSD</strong> tiers.</p>
<img src="/images/15843065925e6e99a0ab239.png" alt="Az2 ">
<p>Below you see the IOPS for <strong>Standard SSD</strong> tiers.</p>
<img src="/images/15843066025e6e99aa74b35.png" alt="Az3 ">
<p>Below you see the IOPS for <strong>Standard HDD</strong> tiers.</p>
<img src="/images/15843066145e6e99b6ba2ee.png" alt="Az4 ">
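<p>Because Azure assigns IOPS per tier rather than per GiB, a lookup table models it better than a formula. The Python sketch below uses illustrative Premium SSD tier values (taken from Microsoft's published tiers at the time; verify against the current documentation before relying on them):</p>

```python
# Illustrative Premium SSD tiers: (max size in GiB, tier name, provisioned IOPS).
# Values reflect Microsoft's published tiers at the time of writing.
PREMIUM_TIERS = [
    (32, "P4", 120),
    (64, "P6", 240),
    (128, "P10", 500),
    (256, "P15", 1100),
    (512, "P20", 2300),
    (1024, "P30", 5000),
]

def premium_tier(size_gib):
    """Return (tier name, IOPS) for the smallest tier that fits the disk."""
    for cap, name, iops in PREMIUM_TIERS:
        if size_gib <= cap:
            return name, iops
    raise ValueError("size exceeds the tiers listed in this sketch")

# A 100 GiB disk lands in the P10 tier and gets that tier's IOPS.
tier, iops = premium_tier(100)
```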
<p>For more information, refer to the below links:
<a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/managed-disks-overview" target="_blank">Microsoft Azure managed disks</a></p>
<p><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html" target="_blank">Amazon EBS Volume Types</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-04-differences-in-iops</link>
<pubDate>Thu, 09 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>PowerShell support in Azure Functions</title>
<description><![CDATA[<p>PowerShell support is now available in Azure Functions. This is a feature that I, and many others, have been waiting for. If you have any reusable functions in your PowerShell scripts, you can now run those in a serverless way via Azure Functions. Microsoft announced that this includes native support for PowerShell Core 6 as well as the Azure Az modules.</p>
<p>To get started, create a new Function App in Azure. While you are creating a new one, now for the &quot;<strong>Runtime stack</strong>&quot; you can select &quot;<strong>PowerShell Core</strong>&quot; as the option. </p>
<img src="/images/15851027185e7abf7e5de6c.png" alt="Function App creation">
<p>For more information please check this link: <a href="https://azure.microsoft.com/en-us/blog/serverless-automation-using-powershell-preview-in-azure-functions/" target="_blank">Serverless automation using PowerShell preview in Azure Functions</a></p>]]></description>
<link>https://HarvestingClouds.com/post/powershell-support-in-azure-functions</link>
<pubDate>Mon, 06 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 03 - Creating Azure VMs</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In the previous post, we covered how to create EC2 instances in AWS. You can revisit that post here: <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-02-creating-ec2-instances/" target="_blank">Creating EC2 Instances in AWS</a></p>
<p>In this post, we will check how to create a Virtual Machine in Azure. If you know one platform, it is very easy to translate that knowledge to the other. The essential part is to look out for the quirks of each platform and note how they differ from each other. Let's dive in.</p>
<p>You can create the VM resource in one of two ways. Either navigate to &quot;<em>Create a resource</em>&quot; and then find the VM you want to build. </p>
<img src="/images/15843034905e6e8d82e484b.png" alt="1 Create a resource">
<p>Or you can navigate to the VMs section and then click on the <strong>Add</strong> button to build the VM.</p>
<img src="/images/15843034975e6e8d89a4c41.png" alt="2 Add a VM">
<p>You are presented with the &quot;Create a virtual machine&quot; wizard. Here you ensure that the subscription and the resource group are correct. A <strong>Subscription</strong> is how you buy and use Azure; you choose between plans like &quot;Pay as you go&quot; when subscribing. <strong>Resource Groups</strong> are nothing but logical groupings of resources in Azure. </p>
<p>You also provide the Instance details like the name of the VM, the region where the VM will be deployed, any redundancy options etc.</p>
<p>You select the VM image that you want to build from, e.g. Windows Server 2016 Datacenter. This is similar to selecting the AMI, or Amazon Machine Image, in AWS. The size decides how many vCPUs and how much memory (in GiB) are allocated to the VM. This also determines how you are billed per month for the VM. Note that you only pay when the VM is running. If the VM is in the Stopped (Deallocated) state, you are not charged for the compute consumption of the VM, only for its storage component.</p>
<img src="/images/15843035325e6e8dacf35ce.png" alt="3_1 Creating a VM">
<p>When you scroll down, this is also where you provide the credentials to be used with the VM. You also configure allowed ports for the VM. E.g. RDP is opened here. You can tweak this further in the Networking section ahead.</p>
<p>The biggest difference here is that if you own a Windows license on-premises, you can leverage it to <strong>save up to 49%</strong> in Azure via the Azure Hybrid Benefit.</p>
<img src="/images/15843035705e6e8dd294530.png" alt="3_1 Creating a VM">
<p>Next, you select the VM disks. Your selection here determines whether you get faster or slower IOPS. You can also add new or existing disks as data disks.</p>
<img src="/images/15843035815e6e8ddde551f.png" alt="4 VM Disks">
<p>In the networking section, you can select an existing Virtual network or create a new one. You can select subnets within that network or create a new one as well. Same goes for the Public IP. This is also where you can tweak the network security group.</p>
<img src="/images/15843035995e6e8def01069.png" alt="5 VM Networking">
<p>Under the Management section, you can configure monitoring options. You can also configure auto-shutdown on the VM.</p>
<img src="/images/15843036465e6e8e1e54144.png" alt="6 VM Management">
<p>Under Advanced settings, you can install Microsoft or third-party extensions like Chef, Puppet, etc. You can also automate one-time installations in your Linux VMs via cloud-init. </p>
<img src="/images/15843036705e6e8e36f01e4.png" alt="7 Advanced settings">
<p>Next, you can assign Tags to categorize the resources. These come in very handy when creating billing reports. Note that not just the VM but also underlying resources (OS and data disks, network interface cards, etc.) will be tagged.</p>
<img src="/images/15843037255e6e8e6d768e8.png" alt="8 Adding Tags">
<p>Finally, you can review the settings. Azure also gives you the option to &quot;Download a template for automation&quot;. This is your ARM template, a JSON file you can leverage to automate or replicate the deployment of the instance multiple times across different environments.</p>
<img src="/images/15843037395e6e8e7b2fa0c.png" alt="9 Reviewing Settings and Creating VM">
<p>Next, you see a screen where you can check the deployment status. Once it succeeds, you are given a link to navigate directly to the virtual machine resource.</p>
<img src="/images/15843037505e6e8e86bb433.png" alt="10 Checking Deployment Status">
<p>You can now view the provisioned VM under the Virtual Machines section.</p>
<img src="/images/15843037695e6e8e99a4c74.png" alt="11 Viewing the provisioned VM">
<p>For more information, refer to the below links:
<a href="https://azure.microsoft.com/en-us/services/virtual-machines/" target="_blank">Microsoft Azure Virtual Machines</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-03-creating-azure-vms</link>
<pubDate>Thu, 02 May 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 02 - Creating EC2 Instances</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>In this post, we will look at how to create EC2 instances in Amazon Web Services (AWS). </p>
<p>You start by selecting the option to create or launch the instances.</p>
<img src="/images/15842968305e6e737ee3631.png" alt="1 Creating Instances">
<p>In the instance creation wizard, the breadcrumb at the top shows you where in the wizard you currently are. The first screen is to choose an image for the VM or EC2 instance. The image is called an <strong>Amazon Machine Image</strong> or <strong>AMI</strong>. You can also search for an AMI. Under the image name, you can view the details of the image. Make sure you select the right image, and click the Select button once you have decided. </p>
<img src="/images/15842969145e6e73d2658a7.png" alt="2 Choosing the Amazon Machine Image or AMI">
<p>Next, select the instance type. Here you are selecting the actual compute details i.e. the number of vCPUs and the Memory in GiB. You are billed as per the instance type you select.</p>
<img src="/images/15842969265e6e73dee9670.png" alt="3 Choosing the Instance Type">
<p>Next, you configure various instance details. This screen is very vast and makes your deployment highly configurable; in this post we will not go into each of these settings. Here you can decide to create an Auto Scaling group, request Spot Instances, add a placement group, select a domain-join directory, decide shutdown behavior, etc. One key setting that you select here is the <strong>networking details</strong>: you select your VPC (Virtual Private Cloud) network and the subnet for the instance.</p>
<img src="/images/15842971165e6e749cf1a37.png" alt="4_1 Configuring Instance Details">
<img src="/images/15842971535e6e74c198196.png" alt="4_2 Configuring Instance Details">
<p>Next, you configure the Storage details. You can tweak the default storage assigned and can also add more storage. Based on the IOPS requirements you select the Volume type. You can encrypt the storage as well.</p>
<img src="/images/15842971705e6e74d26ab0e.png" alt="5 Adding Storage">
<p>Next, you add tags to your instance and underlying volumes. <strong>Tags</strong> are a great way to categorize resources. Below is just an example of a couple of tags assigned to the instance and underlying volumes. </p>
<img src="/images/15842972035e6e74f3d9532.png" alt="6 Adding Tags">
<p>Now you configure the firewall on the instance via Security Groups. You can create a new one or select an existing one. This is where you decide what can and cannot connect to your instance. In the screenshot below, RDP connectivity on port 3389 is open. Please note that a source of &quot;0.0.0.0/0&quot; means it is open to the entire internet. It is highly recommended that you restrict this to only the IP addresses from which you will connect. </p>
<img src="/images/15842972215e6e750563278.png" alt="7 Configuring Security Group">
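<p>A quick way to audit for this is to scan ingress rules for internet-wide sources. A simplified Python sketch (the rule dicts here are an illustration, not the real boto3 response shape):</p>

```python
def open_to_world(rules):
    """Return rules whose source CIDR admits the entire internet.

    Each rule is a dict with "port" and "cidr" keys; 0.0.0.0/0 (IPv4)
    and ::/0 (IPv6) both mean "any address".
    """
    return [r for r in rules if r["cidr"] in ("0.0.0.0/0", "::/0")]

rules = [
    {"port": 3389, "cidr": "0.0.0.0/0"},     # RDP open to everyone - flag it
    {"port": 443, "cidr": "203.0.113.0/24"}, # HTTPS restricted to one range
]
risky = open_to_world(rules)
```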
<p>Finally, you review the details. To inspect a category in detail, just expand it. Click the Edit buttons on the right to change a setting. Hit Launch to trigger the instance provisioning.</p>
<img src="/images/15842972375e6e751597c65.png" alt="8 Review details">
<p>Before the instance can be provisioned, you are prompted with a popup to select a key pair or create a new one. This is a private/public key pair that is used to authenticate. You download and keep the private key. Note that you can only download the private key at creation time; you will not be able to access it later.</p>
<img src="/images/15842972555e6e7527c1a67.png" alt="9 Creating or selecting Key Pair to connect to Instance">
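<p>Since the private key can never be re-downloaded, store it immediately with owner-only permissions (SSH clients reject keys with loose permissions). A small Python sketch, with a hypothetical file name and key material:</p>

```python
import os
import stat
import tempfile

def save_private_key(key_material, path):
    """Write the downloaded .pem once, readable only by the owner.

    Creates the file with owner read/write only, writes the key,
    then drops permissions to read-only (0400), which is what SSH
    expects for identity files.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(key_material)
    os.chmod(path, 0o400)
    return oct(stat.S_IMODE(os.stat(path).st_mode))

path = os.path.join(tempfile.mkdtemp(), "my-keypair.pem")
mode = save_private_key("-----BEGIN RSA PRIVATE KEY-----\n...\n", path)
```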
<p>You can check the launch status on the next screen. </p>
<img src="/images/15842972725e6e75383a905.png" alt="10 Checking Launch Status">
<p>Finally, you can view and check your instance. It will be listed under the Instances section of the EC2 service.</p>
<img src="/images/15842972895e6e7549a0a1c.png" alt="11 Instance in the console">
<p>For more information, refer to the below links:
<a href="https://aws.amazon.com/ec2/" target="_blank">Amazon Elastic Compute Cloud (EC2)</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-02-creating-ec2-instances</link>
<pubDate>Mon, 29 Apr 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Updates: Azure Front Door Service</title>
<description><![CDATA[<p><strong>Azure Front Door</strong> is one of the latest services announced by Microsoft. This is a game-changer as it can do a lot of heavy lifting for your applications. If your application is deployed in multiple regions, then the Front Door service is a no-brainer for your organization. </p>
<p>It works at the application layer, i.e. layer 7. It monitors global routing and automatically routes requests to the best available backend to optimize performance. Routing is based on URL paths. It also provides instant failover at the global level, using smart health probes to check whether your application is available. It can also perform SSL offloading, terminating encrypted traffic at the edge.</p>
<p>To configure a Front Door, you define:</p>
<ul>
<li>Frontend host</li>
<li>Backend pools</li>
<li>Routing rules</li>
</ul>
<img src="/images/15850995455e7ab319037b6.png" alt="3 steps to configure Front Door">
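<p>Conceptually, a routing rule maps URL-path patterns to a backend pool. The Python sketch below illustrates the longest-prefix-match idea (the pool names are hypothetical; the real service evaluates its configured route patterns, not this code):</p>

```python
def route(path, rules):
    """Pick a backend pool by the longest matching URL-path prefix."""
    matches = [(prefix, pool) for prefix, pool in rules if path.startswith(prefix)]
    if not matches:
        return None
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: len(m[0]))[1]

rules = [
    ("/", "web-pool"),            # catch-all
    ("/images/", "static-pool"),  # static assets
    ("/api/", "api-pool"),        # API traffic
]
pool = route("/api/orders", rules)
```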
<p>For more information, check these official links: </p>
<ul>
<li><a href="https://azure.microsoft.com/en-us/services/frontdoor/" target="_blank">Azure Front Door</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/frontdoor/front-door-overview" target="_blank">Front Door Overview</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-updates-azure-front-door-service</link>
<pubDate>Sun, 28 Apr 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Updates: ExpressRoute Direct and Global Reach</title>
<description><![CDATA[<p>ExpressRoute Direct and ExpressRoute Global Reach are available to be used now. These services extend the capabilities of ExpressRoute. </p>
<p><strong>ExpressRoute Direct</strong> gives you the ability to connect directly into Microsoft’s global network at peering locations strategically distributed across the world. This differs from standard ExpressRoute connectivity in that you are not just connecting to one region but getting access to the complete Microsoft global network. It helps with massive data ingestion to storage, physical isolation, dedicated capacity, and burst capacity, using Microsoft’s global backbone to access Azure regions at tremendous scale.</p>
<p><strong>ExpressRoute Global Reach</strong> is the service whereby, if you have two datacenters located at different geo-locations and both connected to Microsoft Azure via ExpressRoute, these two datacenters can also connect to each other securely via Microsoft's backbone. </p>
<p>As shown in the example below, the two locations, San Francisco and London, are connected to Azure via ExpressRoute. Due to the Global Reach feature, the traffic between these two datacenters can now also traverse the Microsoft backbone securely (shown in green).</p>
<img src="/images/15850965485e7aa764c6048.png" alt="Global Reach">
<p>(Image source: Microsoft)</p>
<p>For more information check here: </p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/expressroute/expressroute-erdirect-about" target="_blank">ExpressRoute Direct</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/expressroute/expressroute-global-reach" target="_blank">ExpressRoute Global Reach</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-updates-expressroute-direct-and-global-reach</link>
<pubDate>Sat, 27 Apr 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure for AWS professionals - Virtual Machines vs EC2 instances - 01 - Finding resources</title>
<description><![CDATA[<p><strong>Note</strong> that this post is a part of the series. You can view all posts in this series here:<b> <a href="https://HarvestingClouds.com/post/azure-for-aws-professionals-index" target="_blank">Azure for AWS professionals - Index</a></b></p>
<p>Virtual Machines is one of the key services for Infrastructure as a Service, or <strong>IaaS</strong>. It is the virtualization of the OS and allows you to run your workloads in the cloud. Microsoft and Amazon have different branding and implementations of this service. Where Microsoft calls it <strong>Azure Virtual Machines</strong>, Amazon refers to its similar offering as <strong>Elastic Compute Cloud</strong> or <strong>EC2</strong>.</p>
<p>Let's look at where you find the EC2 service. You navigate to the All Services section in AWS and then, under the <strong>Compute</strong> category, you find the <strong>EC2</strong> option. </p>
<img src="/images/15842933915e6e660f0bdd7.png" alt="AWS - EC2 under all Services">
<p>The EC2 console has loads of information, and your virtual machines, or instances, are not shown right away. You have to click on Instances under the Instances category as shown below. Then you will be able to view all your existing instances. Here you will also have the option to create a new instance by clicking &quot;<em>Launch Instance</em>&quot;. </p>
<img src="/images/15842933975e6e66155b983.png" alt="AWS - Finding Instances">
<p>Similarly, in Azure, you navigate to the three lines at the top left. Then you can either click on &quot;<strong>All Services</strong>&quot; (the 4th option) or you will see &quot;<strong>Virtual machines</strong>&quot; as a direct option. You can also drag and reorder these options as per your preference. </p>
<img src="/images/15842934035e6e661b547d6.png" alt="Azure - VMs under All Services">
<p>In the next screen, you can directly access all your <strong>Virtual machines or VMs</strong>. You can click on your VM or create a new one right from here; you just need to click the <em>Add</em> button as shown below. </p>
<img src="/images/15842934085e6e662019800.png" alt="Azure - VMs">
<p>Next, we will dig deeper and view how the creation of these resources differs.</p>
<p>For more information, refer to the below links:
<a href="https://azure.microsoft.com/en-us/services/virtual-machines/" target="_blank">Microsoft Azure Virtual Machines</a></p>
<p><a href="https://aws.amazon.com/ec2/" target="_blank">Amazon Elastic Compute Cloud (EC2)</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-for-aws-professionals-virtual-machines-vs-ec2-instances-01-finding-resources</link>
<pubDate>Wed, 24 Apr 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>General Availability Alert - Azure AD Password Protection is now GA</title>
<description><![CDATA[<p>Azure Active Directory (AD) Password Protection feature is now generally available. </p>
<p>In a nutshell, this feature gives you the ability to enforce or audit against users setting common and weak passwords. When set to enforce mode, it stops them from setting a weak or common password; e.g. you could stop end users from using a password like &quot;test123&quot; or &quot;password1&quot;. You can add as many passwords to ban as you want. As a good practice, you should add your organization's name and any common internal passwords to this list, along with any variants you believe end users may use. The end goal is to make passwords as difficult and secure as possible. </p>
<p>This feature also allows you to specify the threshold of failed sign-in attempts after which a user is locked out, as well as the time for which the user remains locked out before they can try signing in again. </p>
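<p>The real service normalizes passwords, does fuzzy matching, and scores candidates; the Python sketch below is a much-simplified illustration of the banned-list idea only (the banned words and substitution table are examples, not Microsoft's algorithm):</p>

```python
def is_banned(password, banned):
    """Simplified banned-password check: lowercase, undo a few common
    leetspeak substitutions, then look for banned words as substrings.
    """
    normalized = password.lower()
    for leet, plain in (("@", "a"), ("0", "o"), ("1", "l"), ("$", "s")):
        normalized = normalized.replace(leet, plain)
    return any(word in normalized for word in banned)

# Example banned list: the org name plus a classic weak password.
banned = ["contoso", "password"]
```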
<p>This feature can be accessed by navigating to the Azure Active Directory and then selecting &quot;Authentication methods&quot; under the Security section. You will see the &quot;Password Protection&quot; option opened up in its own blade. </p>
<img src="/images/15542607195ca422ef14a11.jpg" alt="Azure AD Password Protection">
<p><strong>Note</strong>: <em>Preview customers MUST update the agents to the latest version (1.2.116.0 or higher) immediately. The current agents will stop working after July 1, 2019.</em></p>
<p>For more information and the announcement blog from Microsoft, read here: <a href="https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-AD-Password-Protection-is-now-generally-available/ba-p/377487" target="_blank">Azure AD Password Protection is now generally available!</a></p>]]></description>
<link>https://HarvestingClouds.com/post/general-availability-alert-azure-ad-password-protection-is-now-ga</link>
<pubDate>Wed, 03 Apr 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Awareness Month - Webinar #5 Video - How to Use Azure Site Recovery for Backup, Migration, and Disaster Recovery</title>
<description><![CDATA[<p>This last session in the Azure Awareness month covered Azure recovery services, including Azure Backup and Azure Site Recovery. The session discussed how to provide backup and disaster recovery, and also gave a brief strategy overview for migration. As usual, this is a demo-heavy session.</p>
<p>You can check the video here:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/NpAc1XFdkJM" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></description>
<link>https://HarvestingClouds.com/post/azure-awareness-month-webinar-5-video-how-to-use-azure-site-recovery-for-backup-migration-and-disaster-recovery</link>
<pubDate>Sat, 30 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Windows Virtual Desktop in Azure - Public Preview Alert</title>
<description><![CDATA[<p><strong>Windows Virtual Desktop</strong> is a new service that is available as a Public Preview at the time of writing. This is a desktop and app virtualization service that runs in the cloud.</p>
<h3>Pre-requisites</h3>
<p>You need the following to be able to use this service:</p>
<ol>
<li>Azure AD (Active Directory) - If you have an Azure subscription, then you already have this. </li>
<li>Windows Server Active Directory in sync with Azure Active Directory</li>
<li>Azure subscription, containing a virtual network that either contains or is connected to the Windows Server Active Directory - this is where the virtualization will be created and configured</li>
</ol>
<p>Additionally, the Azure VM you create for Windows Virtual Desktop must be:</p>
<ol>
<li>Standard domain-joined or Hybrid AD-joined. Virtual machines can't be Azure AD-joined.</li>
<li>Running one of the following supported OS images: Windows 10 Enterprise multi-session or Windows Server 2016</li>
</ol>
<h3>Working with the Service</h3>
<p>At a very high level the steps involve the following:</p>
<ol>
<li>Grant Azure Active Directory permissions to the Windows Virtual Desktop Preview service</li>
<li>Assign the TenantCreator application role to a user in your Azure Active Directory tenant</li>
<li>Create a Windows Virtual Desktop Preview tenant</li>
<li>Create a Windows Virtual Desktop Host Pool - available as an offering from the Azure Marketplace right now</li>
</ol>
<img src="/images/15540720605ca141fc3f08c.png" alt="Provision a host pool">
<p>You can learn more about this preview and detailed step by step getting started guide here: <a href="https://docs.microsoft.com/en-us/azure/virtual-desktop/overview" target="_blank">Windows Virtual Desktop Preview</a></p>]]></description>
<link>https://HarvestingClouds.com/post/windows-virtual-desktop-in-azure-public-preview-alert</link>
<pubDate>Mon, 25 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Awareness Month - Webinar #4 Video - Security in Azure - How to Make Your Environment More Secure</title>
<description><![CDATA[<p>The fourth session in the Azure Awareness month was all about security in Azure. We discussed different options around security features available in Azure. The focus of this session was to make your environment more secure. </p>
<p>You can check the video recording here:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/L95Ax9W-L2c" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></description>
<link>https://HarvestingClouds.com/post/azure-awareness-month-webinar-4-video-security-in-azure-how-to-make-your-environment-more-secure</link>
<pubDate>Sat, 23 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Differences between Azure Functions and Azure Logic Apps</title>
<description><![CDATA[<p><strong>Azure Functions</strong> and <strong>Azure Logic Apps</strong> are key offerings from Microsoft Azure that provide you with <strong>Serverless Computing</strong>. Both can run business logic for your applications and can also automate a lot of tasks in Azure. Azure Functions can execute code in almost any modern language, whereas Azure Logic Apps are designed in a web-based designer and can execute logic triggered by Azure services without writing any code.</p>
<p>The major difference between the two is that Azure Functions give you a lot more control over the code that gets executed, but you have to author and manage that code. With Azure Logic Apps you work only in a designer, configuring the activities. You get less control, but at the same time you do not have to manage any underlying code. You only need to manage the properties of your activities (or <em>logic blocks</em>) in your <strong><em>workflow</em></strong>.</p>
<p>Additionally, here are more differences between these two services:</p>
<style>
table {
  font-family: arial, sans-serif;
  border-collapse: collapse;
  width: 100%;
}

td, th {
  border: 1px solid #dddddd;
  text-align: left;
  padding: 8px;
}

tr:nth-child(even) {
  background-color: #dddddd;
}
</style>
<table>
<thead>
<tr>
<th></th>
<th>Functions</th>
<th>Logic Apps</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>State</b></td>
<td>Normally stateless, but Durable Functions provide state</td>
<td>Stateful</td>
</tr>
<tr>
<td><b>Development</b></td>
<td>Code-first (imperative)</td>
<td>Designer-first (declarative)</td>
</tr>
<tr>
<td><b>Connectivity</b></td>
<td>About a dozen built-in binding types, write code for custom bindings</td>
<td>Large collection of connectors, Enterprise Integration Pack for B2B scenarios, build custom connectors</td>
</tr>
<tr>
<td><b>Actions</b></td>
<td>Each activity is an Azure function; write code for activity functions</td>
<td>Large collection of ready-made actions</td>
</tr>
<tr>
<td><b>Monitoring</b></td>
<td>Azure Application Insights, Log Analytics via Application Insights connector</td>
<td>Azure portal, Log Analytics</td>
</tr>
<tr>
<td><b>Management</b></td>
<td>REST API, Visual Studio</td>
<td>Azure portal, REST API, PowerShell, Visual Studio</td>
</tr>
<tr>
<td><b>Execution context</b></td>
<td>Can run locally or in the cloud</td>
<td>Runs only in the cloud</td>
</tr>
</tbody>
</table>]]></description>
<link>https://HarvestingClouds.com/post/differences-between-azure-functions-and-azure-logic-apps</link>
<pubDate>Tue, 19 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Awareness Month - Webinar #3 Video - ARM Template Basics: Automated & Repeatable Deployments in Microsoft Azure</title>
<description><![CDATA[<p>This third webinar in the Azure Awareness month series was around ARM Template Basics. In this session, I talked about how to perform automated &amp; repeatable deployments in Microsoft Azure by leveraging ARM Templates. We discussed how to author ARM Templates as well as how to kick-start the development. </p>
<p>You can check the video recording here:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/MZTb9NtZOh4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></description>
<link>https://HarvestingClouds.com/post/azure-awareness-month-webinar-3-video-arm-template-basics-automated-repeatable-deployments-in-microsoft-azure</link>
<pubDate>Sun, 17 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Awareness Month - Webinar #2 Video - Automate Your Azure Environment with Azure Automation and Azure Functions</title>
<description><![CDATA[<p>In this session around automating your environment, we looked at how to work with Azure Automation and Azure Functions. The session included demos to get you started in these technologies. You can check the video below. </p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Bl_6atthAOk" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></description>
<link>https://HarvestingClouds.com/post/azure-awareness-month-webinar-2-video-automate-your-azure-environment-with-azure-automation-and-azure-functions</link>
<pubDate>Sat, 09 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Awareness Month - Webinar #1 Video - Getting Started with Azure – Basics, Tips, and Tricks</title>
<description><![CDATA[<p>We started the Azure Awareness month with the first webinar in the series last Friday. This session was about &quot;Getting Started with Azure – Basics, Tips, and Tricks&quot;. If you missed this session then you can watch this recorded webinar below on YouTube.</p>
<p>In this session we talked about:</p>
<ol>
<li><strong>Cloud Computing and Azure</strong> - to understand the place of Microsoft Azure in the Cloud Computing world and how we fit in</li>
<li><strong>Basic Concepts in Azure</strong> - to lay the right foundation on which all concepts will be built</li>
<li><strong>General Services in Azure</strong> - to provide you with the most essential services in Microsoft Azure</li>
<li><strong>Tips &amp; Tricks</strong> - to give you that edge and make you more efficient in usage of Microsoft Azure</li>
</ol>
<iframe width="560" height="315" src="https://www.youtube.com/embed/5X1UxHmdyUA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></description>
<link>https://HarvestingClouds.com/post/azure-awareness-month-webinar-1-video-getting-started-with-azure-basics-tips-and-tricks</link>
<pubDate>Sun, 03 Mar 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Series Index</title>
<description><![CDATA[<p>Microsoft Azure has lots of services to offer, and sometimes you can feel overwhelmed and lost. These tips and tricks will be your guiding star, making your tasks a bit easier and increasing your efficiency. Even saving a few seconds on the task at hand can accumulate into huge time savings over a longer period. </p>
<p>This blog is an <strong>Index</strong> of various blogs in the series &quot;Azure Tips &amp; Tricks&quot;:</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/azure-tips-tricks-quickly-run-commands-or-script-on-an-azure-virtual-machine-vm-without-logging-into-the-vm/" target="_blank">Quickly run commands or script on an Azure Virtual Machine (VM) without logging into the VM</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-tips-tricks-bootstrap-automation-for-new-resources-for-repeated-deployments-in-multiple-environments/" target="_blank">Bootstrap Automation for <strong>New</strong> resources for repeated deployments in multiple environments</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-tips-tricks-bootstrap-automation-for-existing-resources-for-repeated-deployments-in-multiple-environments/" target="_blank">Bootstrap Automation for <strong>Existing</strong> resources for repeated deployments in multiple environments</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-tips-tricks-find-solutions-to-most-common-problems-for-any-resource/" target="_blank">Find solutions to most common problems for any resource</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-tips-tricks-quickly-navigate-azure-portal-and-search-documentation-with-ease/" target="_blank">Quickly navigate Azure Portal and search documentation with ease</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-tips-tricks-quickly-assign-and-check-the-azure-policy-and-initiative-compliance-status-of-your-resources-with-new-azure-feature/" target="_blank">Quickly assign and check the Azure Policy and Initiative Compliance status of your resources with new Azure feature</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-series-index</link>
<pubDate>Mon, 25 Feb 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Quickly assign and check the Azure Policy and Initiative Compliance status of your resources with new Azure feature</title>
<description><![CDATA[<p>You can now assign policies and initiatives directly from the resource blade. You can also check the compliance status of the policies and take corrective actions on them. This feature is accessible not just from the <strong>Subscriptions</strong> blade but also from the <strong>Resource Group</strong> and <strong>individual resource</strong> blades.</p>
<p>To access this information at the Resource Group level, just navigate to your resource group and look for the &quot;<strong>Policies</strong>&quot; option under the Settings section of the blade. </p>
<img src="/images/15513074625c7712c6bf854.png" alt="Policies Setting at Resource Group level">
<p>To access this from a resource, e.g. a Virtual Machine, navigate to the VM and look for the &quot;Policies&quot; setting under the &quot;Operations&quot; section of the VM. </p>
<img src="/images/15513075245c7713047de46.png" alt="Policies Setting at Resource level">
<p>In the above example, the policy to apply a tag is not compliant. You can click on individual policy names and investigate further.  </p>
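<p>If you prefer to check this programmatically, the same information can be queried with Azure PowerShell. Below is a minimal sketch (assuming the AzureRM and AzureRM.PolicyInsights modules; the resource group name is illustrative):</p>
<pre><code># List policy assignments scoped to a resource group
$rg = Get-AzureRmResourceGroup -Name "MyResourceGroup"
Get-AzureRmPolicyAssignment -Scope $rg.ResourceId

# Query the latest compliance records for the same resource group
Get-AzureRmPolicyState -ResourceGroupName $rg.ResourceGroupName |
    Select-Object ResourceId, PolicyDefinitionName, IsCompliant</code></pre>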
<p>Once I apply the tags and compliance is evaluated the next time, the status will change to Compliant (as shown below). As a best practice, you should review the compliance status of your environment at periodic intervals. </p>
<img src="/images/15513126205c7726ec59bc7.png" alt="Policies Compliance status after taking corrective action">]]></description>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-quickly-assign-and-check-the-azure-policy-and-initiative-compliance-status-of-your-resources-with-new-azure-feature</link>
<pubDate>Thu, 21 Feb 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Updates - New VM-series for Confidential Computing</title>
<description><![CDATA[<p>Azure is introducing a new VM series backed by specialized hardware, which will include the latest generation of Intel SGX. </p>
<ul>
<li>Based on Trusted Execution Environments: Intel SGX, Virtualization Based Security (VBS) </li>
<li>Common application patterns: Protect data confidentiality, integrity, and sensitive IP </li>
<li>Protect data and code in use: Isolated portion of processor and memory, code and data cannot be viewed/modified </li>
<li>Cloud offering: TEE-enabled compute platform, cloud attestation, first-party‒enabled services </li>
<li>Centrally combine data sources, Communicate with secure endpoints, licensing and DRM</li>
</ul>
<img src="/images/15538714305c9e324654528.png" alt="Azure Confidential Computing overview">
<p>At the time of writing of this blog, this feature is in limited preview (access under NDA, announced via blog) as a specialized Virtual Machine series (the DC-series) that will become part of the Azure Compute portfolio; it is the first installment of the broader Azure Confidential Computing (ACC) initiative. Put simply, confidential computing offers protection that to date has been missing from public clouds: <strong>encryption of data while in use</strong>.</p>
<p>You can read more about this here: <a href="https://azure.microsoft.com/en-us/blog/azure-confidential-computing/" target="_blank">Azure confidential computing</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-updates-new-vm-series-for-confidential-computing</link>
<pubDate>Mon, 04 Feb 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Quickly navigate Azure Portal and search documentation with ease</title>
<description><![CDATA[<p>Navigating the Azure portal can become challenging. You want to navigate to different Azure resources or Resource Groups, or want to check a particular service in Azure. Instead of going to &quot;All Services&quot; and searching for your resource every time, searching more efficiently saves a little time that adds up and makes you more efficient as well. So here are a few tips on how you can navigate Azure with ease.</p>
<h3>1. Pin/Favorite the most common Services and re-order them</h3>
<p>For a recent project, I was working with Azure Route Tables a lot, navigating to them multiple times and making a lot of changes. Instead of going to &quot;All Services&quot; and searching for the Route Tables service every time, you can search for it once and star it to mark it as a Favorite. It will then show up in the Services list to the left under Favorites.  </p>
<img src="/images/15513414255c779771e2940.png" alt="Marking a Service as Favorite">
<p>Next, you may want to re-order the services under Favorites so you don't have to scroll to reach the most common ones. You can do so by hovering the cursor over the service name under Favorites, then clicking and dragging the ellipsis (i.e. the 3 dots to the right of the service name) as shown below. </p>
<img src="/images/15513414435c7797839cd64.png" alt="Re-ordering the service in Favorites">
<p>Now you will have the most commonly visited services at your fingertips. </p>
<h3>2. Pin most common Resources to your dashboard</h3>
<p>Another scenario I faced in one of the projects was that every time I logged in, I had to make some changes to a VM and connect to it using its dynamically assigned Public IP. That meant navigating to this VM every day, starting it, and connecting to it. I had to start the VM because auto-shutdown was configured on it to save cost. Instead of navigating to Virtual Machines, searching for the VM, and then navigating to it, you can simply pin the VM to your dashboard. The next time you open the portal, the VM will be right there on the Dashboard. </p>
<img src="/images/15513414555c77978fbf2b8.png" alt="Pinning a resource to a dashboard">
<p>You can similarly Pin other resources like SQL Databases, etc.</p>
<h3>3. Searching for Resources, Resource Groups, Services and Documentation using the new Search bar</h3>
<p>One of the cool features I stumbled upon is the new search bar in the Azure portal. It has made navigating the Azure portal so much easier. You just start typing the resource, Resource Group, or service name in the text box at the top of the portal. You don't even have to type the whole thing: just start typing and results will start popping up. Look for the different sections in the suggestions box that pops up under the text box, then click on your desired resource, Resource Group, or service. </p>
<img src="/images/15513414665c77979a37437.png" alt="Navigating the smart way">
<p>One advantage of this search box is that you can search not just for resources, resource groups, or services, but also for Marketplace solutions and Documentation. I have leveraged this feature to search for documentation a lot. You can't remember everything, and you have to reference documentation every now and then. Instead of googling for the documentation, you can quickly search for it and access it without leaving the Azure portal. You don't have to worry about navigating away, either: the documentation will open in a separate tab automatically. </p>
<p>I hope that these tips will help you increase your efficiency. Let us know which tip helped you and which you already knew about, in the comments below. </p>]]></description>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-quickly-navigate-azure-portal-and-search-documentation-with-ease</link>
<pubDate>Fri, 01 Feb 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Find solutions to most common problems for any resource</title>
<description><![CDATA[<p>When working with Azure resources, you will be faced with the challenge of diagnosing, troubleshooting and solving issues related to your resources. Opening a Microsoft Support case should be the last resort. </p>
<p>This tip is generic in nature and shows you where to look for solutions to the most common problems related to a resource. You can find this option under any resource as &quot;<strong>Diagnose and solve problems</strong>&quot;.</p>
<img src="/images/15513430465c779dc6376de.png" alt="Diagnose and solve problems">
<h3>Troubleshooting Workflow</h3>
<p>Using this option, a common troubleshooting workflow looks like this:</p>
<ol>
<li>Click on the &quot;<em>Diagnose and solve problems</em>&quot; option under your resource's blade.</li>
<li>Look at Resource Health. I have had a scenario where, after a lot of troubleshooting around Azure Automation, we came to know that the service was experiencing issues in our intended region (US South Central). A quick check here will rule out the obvious.</li>
<li>Next, you want to check the &quot;<strong>Activity logs</strong>&quot;. You can check those right from this screen. You can find if anybody changed the configurations which might have resulted in errors you are observing. </li>
<li>The next section shows you some of the <strong>most common issues</strong> and how to resolve them. Scroll through these; sometimes they include step-by-step guides to help you. Follow them and ensure you did not miss a step.</li>
<li>If nothing works, you can finally open a <strong>Microsoft Support request</strong> right from this screen. <strong>Note</strong>: If you can't find your issue under the common problems, I highly recommend that you search for it in forums like Stack Overflow before you open the Microsoft Support request. If it is a business-critical issue, then I would recommend that you open the support ticket first and then search for the issue, in the interest of time. </li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-find-solutions-to-most-common-problems-for-any-resource</link>
<pubDate>Fri, 25 Jan 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Bootstrap Automation for Existing resources for repeated deployments in multiple environments</title>
<description><![CDATA[<p>When you want to automate resource deployment, you will either create automation templates for an existing set of resources or create templates for resources that have not been created yet. The former situation applies when, for example, you have performed a Proof of Concept and now want to automate the deployment of that set of resources. </p>
<p>You want to automate this because you want to deploy these resources multiple times, at different times, in a predictable fashion and without errors from manual mistakes creeping in. The number of resources can be fairly large, and you will actually end up saving time by generating these automation templates instead of manually creating everything.</p>
<h3>Generating Automation for an existing set of resources</h3>
<p>To generate automation for an existing set of resources, go either to the resource group or to any of the resources. Under <strong>Settings</strong>, look for the &quot;<strong>Automation script</strong>&quot; option. Depending upon the number and type of resources, it may take some time to generate the template.</p>
<p><strong>Note</strong>: Even though the option says &quot;Automation script&quot;, it actually generates an ARM Template for the automation. </p>
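<p>The same export can also be performed with Azure PowerShell, which is handy when you want to script it. Below is a minimal sketch (the resource group name and output path are illustrative):</p>
<pre><code># Export an ARM Template for all resources in a resource group
Export-AzureRmResourceGroup -ResourceGroupName "MyResourceGroup" -Path "C:\Templates\template.json"</code></pre>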
<img src="/images/15513440755c77a1cb534d8.png" alt="Automation Template generation for the resources">
<p>Once the template is ready you can do the following:</p>
<ol>
<li>You can inspect the template right there on the screen.</li>
<li>You can download the template or directly add it to your library. Once in the library, you can modify it and share it with your team. Or you can directly Deploy it by clicking on the deploy button.</li>
<li>Look out for the errors regarding the resources that can't be exported. E.g. Key vault policies can't be exported by this wizard.</li>
<li>Next, the Template and Parameters are shown. You can also view Azure CLI, PowerShell, .NET, or Ruby code samples to deploy the template.</li>
<li>Make sure that you inspect the template thoroughly. Remove the resources that you don't need. </li>
<li>Also, there will be a lot of hard-coded values in this auto-generated template. It is still better than starting everything from scratch and doing everything manually. </li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-bootstrap-automation-for-existing-resources-for-repeated-deployments-in-multiple-environments</link>
<pubDate>Thu, 24 Jan 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Bootstrap Automation for New resources for repeated deployments in multiple environments</title>
<description><![CDATA[<p>When you want to automate resource deployment, you will either create automation templates for an existing set of resources or create templates for resources that have not been created yet. For the latter situation, e.g. automating the deployment of a new set of resources, you would have to start the automation from scratch. Instead of starting from scratch, you can bootstrap your automation via ARM Templates.</p>
<h3>Reuse Existing</h3>
<p>You can first search for your resources or scenarios in the Microsoft Quickstart templates. Note that these are sample templates from Microsoft and community members. These templates can be found here: <a href="https://github.com/Azure/azure-quickstart-templates" target="_blank">Azure Quickstart Templates</a>. </p>
<p>When you click on a template name you will see multiple files, as shown below. The actual template file will always be named &quot;<strong><em>azuredeploy.json</em></strong>&quot;. The related parameters file will be named &quot;<strong><em>azuredeploy.parameters.json</em></strong>&quot;. All other files are optional supporting files for the template.</p>
<img src="/images/15513749945c781a929e78b.png" alt="Sample Template in Quickstart Repo on GitHub">
<p>You can download these files and edit them. You can also fork them directly on GitHub and start editing in a source-controlled environment, but your changes will be public (unless you change your repository to private). If you do not want to make changes and want to deploy one of the templates directly, simply click on the &quot;<strong>Deploy to Azure</strong>&quot; button. This will open the Azure portal, where you will need to sign in to your subscription. The template deployment blade will then open with the template information already populated. You will just need to select the right Subscription and Resource Group and provide the various parameters. </p>
<p>It is really simple and saves you a lot of time compared to authoring the template from scratch. Why reinvent the wheel? Reuse instead!</p>
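<p>If you prefer scripting over the portal button, a quickstart template can also be deployed directly from its raw GitHub URL with Azure PowerShell. Below is a minimal sketch (the resource group, template URL, and parameter values are illustrative):</p>
<pre><code># Deploy a quickstart template straight from GitHub (URL is illustrative)
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "MyResourceGroup" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json" `
    -TemplateParameterObject @{ storageAccountType = "Standard_LRS" }</code></pre>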
<h3>Generating template for new resource creation in Azure</h3>
<p>If you are creating a new resource from the Azure portal and have to perform the same action multiple times, you should consider using an ARM Template. Even if you can't find an existing template for the resource, you can use the Azure portal to bootstrap the template creation rather than starting from scratch. </p>
<p>Simply go ahead and start creating your resource from the Azure portal. At the last screen, instead of actually creating the resource, look for a link named &quot;<strong>Download a template for automation</strong>&quot;. This will open another blade showing the template. You can download the template from here; the download also includes the related parameters file and a script to perform the deployment of your template. The screenshot below shows this option for Storage Account creation.</p>
<img src="/images/15513748915c781a2b8fa9a.png" alt="Option for Download a template for automation">
<p><strong>Note</strong>: This option is not available for all resource types. Microsoft is adding it to more and more resources every day. </p>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-bootstrap-automation-for-new-resources-for-repeated-deployments-in-multiple-environments</link>
<pubDate>Tue, 15 Jan 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Tips &amp; Tricks - Quickly run commands or script on an Azure Virtual Machine (VM) without logging into the VM</title>
<description><![CDATA[<p>When you are working with Azure Virtual Machines (VMs), you will often need to run a command on a VM. Normally this means connecting to the VM, logging in, opening a console, and then actually executing the command. Instead of going through the trouble of connecting and logging in, you can simply run the command on the VM directly from the Azure portal. </p>
<p><strong>Note 1</strong>: This feature uses the Azure VM Agent on the VM and will not work if the agent is missing. </p>
<p>To leverage this option navigate to your VM in the portal and select the option to &quot;<strong>Run Command</strong>&quot; under &quot;Operations&quot; section of the VM.</p>
<img src="/images/15513791435c782ac72f69d.png" alt="Run Command Option">
<p>As shown above, you can run a complete PowerShell script right from this screen. There are various options for common scenarios, like checking the IP configuration, which simply executes &quot;<em>ipconfig /all</em>&quot;. You can inspect these common scenarios and their underlying script by clicking on them. If you want to execute one, simply click &quot;Run&quot; in the popup blade. </p>
<img src="/images/15513791525c782ad04aff0.png" alt="Running common scenarios">
<p><strong>Note 2</strong>: You will need to wait for the output to appear. It takes some time, so you just have to be patient. Once you click Run, the button is disabled while the operation runs. Once the operation is complete, you will see the output in the window. Do not navigate away from this popup blade/window. </p>
<h3>Example of Running PowerShell Script</h3>
<p>Let us say that you want to install IIS on the VM for quick testing. This can be done easily by running the below PowerShell:</p>
<pre><code>Install-WindowsFeature -name Web-Server -IncludeManagementTools</code></pre>
<p>Click on the &quot;<strong>RunPowerShellScript</strong>&quot; option. In the popup blade, type the script and hit Run. Wait for the output to appear. Do not navigate away from the window. You can check the status as indicated below.</p>
<img src="/images/15513791605c782ad85f19e.png" alt="Running any PowerShell Script">
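<p>The same Run Command operation can also be invoked from Azure PowerShell instead of the portal. Below is a minimal sketch (the resource names and script path are illustrative):</p>
<pre><code># install-iis.ps1 is a local file containing:
#   Install-WindowsFeature -name Web-Server -IncludeManagementTools
Invoke-AzureRmVMRunCommand `
    -ResourceGroupName "MyResourceGroup" `
    -VMName "MyVM" `
    -CommandId "RunPowerShellScript" `
    -ScriptPath ".\install-iis.ps1"</code></pre>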
<p>That's all there is to it. For more information you can check the official documentation that can be found here: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/run-command" target="_blank">Run PowerShell scripts in your Windows VM with Run Command</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-tips-tricks-quickly-run-commands-or-script-on-an-azure-virtual-machine-vm-without-logging-into-the-vm</link>
<pubDate>Thu, 10 Jan 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Disassociate and Associate Subnets to Route Tables</title>
<description><![CDATA[<p>If you need to test connectivity of your environment without the User Defined Routes then the easiest and most non-disruptive way is to simply disassociate the subnets from the Route Tables. Once your testing is complete, then you can simply reassociate the subnets back to the original Route Tables. </p>
<p><strong>Note</strong>: Use these scripts with caution as these will affect the networking and communication between resources. Go through the scripts and ensure that you know what the scripts are doing before executing these in your environment. Also, always test these in a dev environment before running in any other environment. </p>
<h3>Script samples and how they work</h3>
<p>The scripts in this sample are:</p>
<ol>
<li><strong>Disassociate-SubnetsFromUDRs.ps1</strong> - This script disassociates the subnets from the Route Tables</li>
<li><strong>Associate-SubnetsToUDRs.ps1</strong> - This script reassociates the subnets back to the Route Tables. This uses a CSV file as an input. A sample of the CSV file is also provided along with the script. </li>
</ol>
<p>The <strong>disassociation script</strong> fetches the Route Tables and then gets the associated Subnet details from there. It then removes the association by setting the route table property on the subnet to <strong><em>$null</em></strong> as shown below:</p>
<pre><code>Set-AzureRmVirtualNetworkSubnetConfig `
-Name $subnetName `
-VirtualNetwork $virtualNetwork `
-AddressPrefix $subnetAddressPrefix `
-RouteTable $null |
Set-AzureRmVirtualNetwork</code></pre>
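<p>Putting it together, the disassociation logic can be sketched as below (a simplified illustration; the actual script in the repo includes additional handling):</p>
<pre><code>$routeTables = Get-AzureRmRouteTable
foreach ($routeTable in $routeTables) {
    foreach ($subnet in $routeTable.Subnets) {
        # Subnet Id format: /subscriptions/{sub}/resourceGroups/{rg}/providers/
        #   Microsoft.Network/virtualNetworks/{vnet}/subnets/{name}
        $idParts = $subnet.Id -split "/"
        $vnetResourceGroup = $idParts[4]
        $vnetName = $idParts[8]
        $subnetName = $idParts[10]

        $virtualNetwork = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $vnetResourceGroup
        $subnetConfig = Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $virtualNetwork

        # Remove the association by clearing the RouteTable property
        Set-AzureRmVirtualNetworkSubnetConfig `
            -Name $subnetName `
            -VirtualNetwork $virtualNetwork `
            -AddressPrefix $subnetConfig.AddressPrefix `
            -RouteTable $null |
        Set-AzureRmVirtualNetwork
    }
}</code></pre>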
<p>The <strong>association script</strong> works in exactly the opposite way. The major difference is that it fetches its information from a CSV file, a sample of which is provided alongside the script. This CSV can also be auto-generated (prior to disassociation). The script sample for generating it can be found here: <a href="https://HarvestingClouds.com/post/script-sample-generate-report-for-route-tables-with-associated-subnets-and-related-information/" target="_blank">Generate Report for Route Tables with associated Subnets and related information</a></p>
<p>The command for creating the association of Subnet to the Route Table looks like below. </p>
<pre><code>Set-AzureRmVirtualNetworkSubnetConfig `
-Name $subnetName `
-VirtualNetwork $virtualNetwork `
-AddressPrefix $subnetAddressPrefix `
-RouteTable $routeTable |
Set-AzureRmVirtualNetwork </code></pre>
<h3>Location of the Scripts</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/User%20Defined%20Routes%20(UDRs)%20or%20Route%20Tables%20Related%20Scripts" target="_blank">User Defined Routes (UDRs) or Route Tables Related Scripts</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-disassociate-and-associate-subnets-to-route-tables</link>
<pubDate>Mon, 07 Jan 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Generate Report for Route Tables with associated Subnets and related information</title>
<description><![CDATA[<p>When you have custom User Defined Routes (UDRs) in your environment on your Route Tables, you will have these Route Tables associated with various Subnets. If you want to <strong>report on the association of Route Tables with Subnets</strong> and any related information then you can use this sample script. </p>
<p>One use case is to use this script to export the associations, so that if any association gets deleted you can recreate it later by referring to this report. Another script sample uses the output of this script to perform the association of the subnets to the Route Tables. That sample can be found here: <a href="https://HarvestingClouds.com/post/script-sample-disassociate-and-associate-subnets-to-route-tables/" target="_blank">Disassociate and Associate Subnets to Route Tables</a></p>
<h3>Script Requirements and Workings</h3>
<p>The script only requires you to update the path for the output csv report. </p>
<p>The script connects to Azure and fetches all Route Tables using the below command.</p>
<pre><code>$routeTables = Get-AzureRmRouteTable</code></pre>
<p>It then finds all the subnets linked to those Route Tables by using the Subnets property on the route table object, as shown below.</p>
<pre><code>$routeSubnets = $routeTable.Subnets</code></pre>
<p>The script then iterates over all the subnets one by one. For each subnet, it finds the related information such as the Subscription ID, the Virtual Network of the subnet, and the Resource Group of that Virtual Network. It saves all this information to an array object named <strong>$results</strong>.</p>
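<p>Inside that loop, each report row can be built roughly as below (a sketch; the variable and column names are illustrative, holding the values gathered for the current subnet):</p>
<pre><code>$results += [PSCustomObject]@{
    RouteTableName     = $routeTable.Name
    SubnetName         = $subnetName
    VirtualNetworkName = $vnetName
    VNetResourceGroup  = $vnetResourceGroup
    SubscriptionId     = $subscriptionId
}</code></pre>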
<p>Finally, the script outputs all the results using the Export-Csv cmdlet:</p>
<pre><code>$results | Export-Csv -Path $PathToOutputCSVReport -NoTypeInformation</code></pre>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/User%20Defined%20Routes%20(UDRs)%20or%20Route%20Tables%20Related%20Scripts" target="_blank">Report-UDRsWithSubnetInfo.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-generate-report-for-route-tables-with-associated-subnets-and-related-information</link>
<pubDate>Sun, 06 Jan 2019 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Removing Locks from Azure Resources</title>
<description><![CDATA[<p>In one of the earlier posts, we discussed the benefits of using locks and why you should apply these (to avoid accidental deletion) to any critical and important resources in Azure. We also saw a script sample to apply the locks on all key resources in your environment. The same can be reviewed again here: <a href="https://HarvestingClouds.com/post/script-sample-apply-locks-on-various-azure-resources/" target="_blank">Apply Locks on Various Azure Resources</a>.</p>
<p>Once you have locks in place, you won't be able to remove the resources or remove configurations from them. To be able to do either, you will first need to remove the locks. You can do so in an automated fashion by using this script sample.</p>
<h3>Script Requirements and Workings</h3>
<p>The script does not have any specific requirements. </p>
<p>The script sample removes the lock from all Azure Route Tables, but the same concept can be applied to any type of resource. It first fetches the Route Tables by using the below command:</p>
<pre><code>$routeTables = Get-AzureRmRouteTable</code></pre>
<p>Then it simply removes the lock by using the below command:</p>
<pre><code>Remove-AzureRmResourceLock -LockName DoNotDelete -ResourceGroupName $routeTable.ResourceGroupName -ResourceName $routeTable.Name -ResourceType $routeTable.Type -Force</code></pre>
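<p>Putting the two commands together, the core of the script is a simple loop (a sketch assuming every lock was created with the name <em>DoNotDelete</em>, as in the sample):</p>
<pre><code>$routeTables = Get-AzureRmRouteTable
foreach ($routeTable in $routeTables) {
    Remove-AzureRmResourceLock -LockName DoNotDelete `
        -ResourceGroupName $routeTable.ResourceGroupName `
        -ResourceName $routeTable.Name `
        -ResourceType $routeTable.Type `
        -Force
}</code></pre>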
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Remove-Locks" target="_blank">Remove-Locks.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-removing-locks-from-azure-resources</link>
<pubDate>Sun, 16 Dec 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - VM Operations - Working with VM Snapshots and moving the Managed VMs across subscriptions and regions</title>
<description><![CDATA[<p>This script sample is a combination of two scripts. At the time of writing of this blog, the feature to move managed disks, or images built from Managed Disk VMs, across subscriptions or regions is not available in Azure. The workaround is as follows.</p>
<h3>The two scripts</h3>
<p>The two scripts provided are: </p>
<ol>
<li><strong>Copy-SnapshotToStorage.ps1</strong> - This copies the Snapshot to a Storage account. This storage account can be in a different subscription or even a different region. The script uses the &quot;<strong><em>Start-AzureStorageBlobCopy</em></strong>&quot; cmdlet and also shows the progress of the copy operation. </li>
<li><strong>Create-VMFromSnapshot.ps1</strong> - This creates a VM from a Snapshot in a Storage account. It creates a Managed Disks VM leveraging the snapshot.</li>
</ol>
<h3>Scenario 1 - Moving a Managed VM or a Disk across subscriptions or regions</h3>
<p>You will need to follow the below steps to move a VM built using Managed disks across subscriptions or regions: </p>
<ol>
<li>First, create a snapshot on the VM</li>
<li>Then move the snapshot to a Storage account in the target area. Use the first script here.</li>
<li>Then create a VM using the snapshot by leveraging the second script provided in this post.</li>
</ol>
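<p>Putting the steps together, the end-to-end flow might look like the below sketch. The resource names are placeholders, and the snapshot-creation cmdlets shown for step 1 stand in for however you choose to create the snapshot.</p>
<pre><code># 1. Snapshot the OS disk of the source Managed Disk VM
$vm = Get-AzureRmVM -ResourceGroupName "Source-RG" -Name "SourceVM"
$snapConfig = New-AzureRmSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
    -Location $vm.Location -CreateOption Copy
$snapshot = New-AzureRmSnapshot -Snapshot $snapConfig -SnapshotName "SourceVM-OS-Snap" `
    -ResourceGroupName "Source-RG"

# 2. Copy the snapshot to a storage account in the target subscription/region
#    using the first script from this post (Copy-SnapshotToStorage.ps1)

# 3. Create a Managed Disk VM from the copied snapshot in the target area
#    using the second script from this post (Create-VMFromSnapshot.ps1)</code></pre>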
<h3>Scenario 2 - Moving the Image across subscriptions or regions</h3>
<p>Follow the below steps to move an Image built using Managed Disks across subscriptions or regions: </p>
<ol>
<li>Create a new VM using the image</li>
<li>Create a snapshot on the VM</li>
<li>Move the snapshot across to a Storage Account in the target area. Use the first script here.</li>
<li>Create a VM using the snapshot in the target area using the second script.</li>
<li>Finally, capture the VM to an image. Delete the image and snapshots from both source and target once the image is created in the target area.</li>
</ol>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Working%20with%20Azure%20VM%20Snapshots" target="_blank">Working with Azure VM Snapshots</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-vm-operations-working-with-vm-snapshots-and-moving-the-managed-vms-across-subscriptions-and-regions</link>
<pubDate>Fri, 14 Dec 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - VM Operations - Export VM Configurations</title>
<description><![CDATA[<p>Exporting a VM's configuration is a best practice that you should follow before altering any configuration on the Virtual Machine. Any action that could corrupt the VM or its configuration should be preceded by an export of the VM's configuration. This allows you to refer back to the configuration and recreate the VM in case of any unexpected scenario. </p>
<p>With the script sample in this post, you can export multiple VMs by listing them in a CSV configuration file. This file is very simple, and its creation can itself be automated. </p>
<p>The script can also be altered to take an export of all the VMs in the environment. </p>
<h3>Script Requirements</h3>
<p>The script requires you to populate a configuration CSV file; a sample CSV file is provided along with the script. This configuration file should have the following columns:</p>
<ol>
<li>Computer - Name of the VM in Azure</li>
<li>ResourceGroupName - Resource Group of the VM</li>
</ol>
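<p>A minimal configuration file matching these columns could look like the below; the VM and Resource Group names are placeholders.</p>
<pre><code>Computer,ResourceGroupName
WebVM01,Prod-Web-RG
SqlVM01,Prod-Data-RG</code></pre>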
<h3>Script Working</h3>
<p>This is a very simple yet highly reusable script. It takes the export in two steps.</p>
<p>First it fetches the VM as shown below:</p>
<pre><code>$currentVm = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $virtualMachineName -ErrorAction Stop</code></pre>
<p>Then, it takes the export by using the below command.</p>
<pre><code>$currentVm | ConvertTo-Json -Depth 100 | Out-File -FilePath $fileName</code></pre>
<p><strong>The Trick</strong>: The most important thing is the Depth parameter of the ConvertTo-Json cmdlet. If you do not specify it, nested properties may not export properly. Specifying a large enough depth ensures that all nested properties are also exported. </p>
<p>The script encapsulates all this logic into a reusable function and also provides a best-practices template for invoking it.
<strong>Note</strong>: Check the comments marked with &quot;<strong>ToDo</strong>&quot; for the places where you should make changes. </p>
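<p>A minimal sketch of such a reusable function is shown below; the function name and parameter set here are illustrative, not necessarily those used in the repository.</p>
<pre><code>function Export-VMConfig
{
    param(
        [string]$ResourceGroupName,
        [string]$virtualMachineName,
        [string]$outputFolder
    )

    # Fetch the VM; -ErrorAction Stop aborts if the VM does not exist
    $currentVm = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $virtualMachineName -ErrorAction Stop

    # Export with a large -Depth so nested properties are not truncated
    $fileName = Join-Path $outputFolder "$virtualMachineName.json"
    $currentVm | ConvertTo-Json -Depth 100 | Out-File -FilePath $fileName
}</code></pre>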
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20VM%20Operations" target="_blank">Export-VMConfig.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-vm-operations-export-vm-configurations</link>
<pubDate>Tue, 11 Dec 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - VM Operations - Apply HUB Licensing to Existing VMs</title>
<description><![CDATA[<p>If you have older VMs in the environment and you want to leverage Microsoft's Hybrid Use Benefit, i.e. HUB licensing, then this script can update multiple VMs in your environment in an automated way for you. </p>
<p><strong>NOTE</strong>: </p>
<ul>
<li>Before you apply the HUB licensing, please confirm that you are eligible for the HUB licensing benefits. Refer to the official documentation here: <a href="https://azure.microsoft.com/en-ca/pricing/hybrid-benefit/" target="_blank">Azure Hybrid Benefit</a></li>
</ul>
<h3>Script Requirements</h3>
<p>The script requires you to populate a configuration CSV file; a sample CSV file is provided along with the script. This configuration file should have the following columns:</p>
<ol>
<li>Computer - Name of the VM in Azure</li>
<li>OSType - Windows or Linux</li>
<li>ResourceGroupName - Resource Group of the VM</li>
<li>HUBLicensingNeeded - yes or no. The HUB Licensing is set for only the VMs for which this value is set to yes.</li>
</ol>
<h3>Script Working</h3>
<p>In its entirety, the HUB licensing is just a switch for &quot;<strong>LicenseType</strong>&quot; on the VM. For Windows servers, if its value is set to &quot;<strong>Windows_Server</strong>&quot;, this means that you own a Windows Server license and are claiming the HUB licensing benefits on this VM. </p>
<p>The script applies the HUB licensing in 3 steps. First, it fetches the VM.</p>
<pre><code>$currentVm = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $virtualMachineName -ErrorAction Stop</code></pre>
<p>Then it applies the HUB licensing as shown below.</p>
<pre><code>$currentVm.LicenseType = "Windows_Server"</code></pre>
<p>Lastly, it updates the VM object.</p>
<pre><code>Update-AzureRmVM -VM $currentVm -ResourceGroupName $ResourceGroupName</code></pre>
<p><strong>Note</strong>: As a best practice, the script also checks whether the HUB licensing is already applied. If the HUB licensing is already there on the VM, the script does not apply it again. There may be a small downtime while the VM is updating, so a short outage window should be planned for this operation. </p>
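<p>The already-applied check could be as simple as the below sketch (an illustration of the idea, not necessarily the exact code in the script):</p>
<pre><code>if ($currentVm.LicenseType -eq "Windows_Server")
{
    Write-Host "HUB licensing is already applied on $($currentVm.Name); skipping."
}
else
{
    $currentVm.LicenseType = "Windows_Server"
    Update-AzureRmVM -VM $currentVm -ResourceGroupName $ResourceGroupName
}</code></pre>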
<p>The script encapsulates all this logic into a reusable function and also provides a best-practices template for invoking it.
<strong>Note</strong>: Check the comments marked with &quot;<strong>ToDo</strong>&quot; for the places where you should make changes. </p>
<p>You can find more related details in official documentation here: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/hybrid-use-benefit-licensing#convert-an-existing-vm-using-azure-hybrid-benefit-for-windows-server" target="_blank">Azure Hybrid Benefit for Windows Server</a></p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20VM%20Operations" target="_blank">Apply-HubLicensing.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-vm-operations-apply-hub-licensing-to-existing-vms</link>
<pubDate>Fri, 07 Dec 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - VM Operations - Change Azure VM Size</title>
<description><![CDATA[<p>If you want to change the size of multiple Azure VMs in your environment, you can leverage this script sample to update the sizes in one go. You will want to resize VMs periodically after thoroughly reviewing their consumption; this can optimize your environment and save you money in the process as well. </p>
<h3>Script Requirements</h3>
<p>The script requires you to populate a configuration CSV file; a sample CSV file is provided along with the script. This configuration file should have the following columns:</p>
<ol>
<li>Computer - Name of the VM in Azure</li>
<li>OSType - Windows or Linux</li>
<li>ResourceGroupName - Resource Group of the VM</li>
<li>SizeConversionNeeded - yes or no</li>
<li>NewVMComputeSize - new size of the VM. e.g. Standard_DS4_v2. This should conform to the VM size names. For Windows VMs these names can be found by navigating to one of the sizes related link here: <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes" target="_blank">Sizes for Windows virtual machines in Azure</a></li>
</ol>
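<p>Driving the resize from this file could look like the below sketch. The file path is a placeholder, and <strong><em>Resize-VM</em></strong> is a hypothetical name for a function wrapping the three steps described in the next section.</p>
<pre><code>$vmConfigs = Import-Csv -Path "C:\Temp\VMSizeConfig.csv"

# Only process the rows flagged for conversion
foreach ($config in ($vmConfigs | Where-Object { $_.SizeConversionNeeded -eq "yes" }))
{
    Resize-VM -ResourceGroupName $config.ResourceGroupName `
              -virtualMachineName $config.Computer `
              -vmSize $config.NewVMComputeSize
}</code></pre>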
<h3>Script Working</h3>
<p>The script works in 3 simple steps. In the first step, the script fetches the Azure VM using the Get-AzureRmVM cmdlet and ensures that it exists. </p>
<pre><code>$currentVm = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $virtualMachineName -ErrorAction Stop</code></pre>
<p>The second step is to update the size property on the Hardware profile of the VM object.</p>
<pre><code>$currentVm.HardwareProfile.VmSize = $vmSize </code></pre>
<p>Lastly, the script updates the VM using the below command.</p>
<pre><code>Update-AzureRmVM -VM $currentVm -ResourceGroupName $ResourceGroupName</code></pre>
<p><strong>Note</strong> that the VM will restart during the size conversion, so do plan for an outage, albeit a very short one. Keep some buffer in the outage window to account for any unforeseen issues that may occur during the conversion. I have converted various VMs and have never encountered any issues; still, when I plan for an outage, as a best practice I plan for a larger window. </p>
<p>The script encapsulates all this logic into a reusable function and also provides a best-practices template for invoking it.
<strong>Note</strong>: Check the comments marked with &quot;<strong>ToDo</strong>&quot; for the places where you should make changes. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20VM%20Operations" target="_blank">Change-AzureVMSize.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-vm-operations-change-azure-vm-size</link>
<pubDate>Mon, 03 Dec 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - VM Operations - Convert VM to Managed Disk VM</title>
<description><![CDATA[<p>If you want to convert multiple VMs from older Storage Account disks to Managed Disks, you can leverage the automation capabilities provided by Microsoft via PowerShell. This script sample uses these capabilities in a managed and reusable way.</p>
<h3>Script Requirements</h3>
<p>The script requires you to populate a configuration CSV file; a sample CSV file is provided along with the script. This configuration file should have the following columns:</p>
<ol>
<li>Computer - Name of the VM in Azure</li>
<li>OSType - Windows or Linux</li>
<li>ResourceGroupName - Resource Group of the VM</li>
<li>ManagedDiskConversion - yes or no. Set this to yes for the VMs for which you want to perform the conversion from older disk type to Managed Disk VMs</li>
</ol>
<h3>Script Working</h3>
<p>The process of Managed Disk conversion is different for a VM in an Availability Set and for one without. For this reason, the script first checks for the Availability Set of the VM.</p>
<pre><code>$avSet = Get-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name $asName -ErrorAction Stop</code></pre>
<p>It then checks and sets the <strong>Managed</strong> flag on the Availability Set by using the following command.</p>
<pre><code>Update-AzureRmAvailabilitySet -AvailabilitySet $avSet -Managed -ErrorAction Stop</code></pre>
<p>It then stops the VM, converts it to a Managed Disk VM and then waits for 600 seconds. The conversion is performed by the following command.</p>
<pre><code>ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vm.Name</code></pre>
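<p>Put together, the stop-convert-wait sequence looks roughly like the below sketch; the 600-second pause mirrors the wait described above.</p>
<pre><code># Deallocate the VM before conversion
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name -Force

# Convert all of the VM's disks to Managed Disks
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vm.Name

# Give the platform time to settle before processing the next VM
Start-Sleep -Seconds 600</code></pre>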
<p><strong>Note</strong>: The script does not start the VM, to avoid unnecessary costs; e.g. the VM may have been stopped to begin with and you didn't want it started. Unnecessarily starting a VM will incur charges that you do not want. You can still update the script to start the VMs by simply adding the <strong><em>Start-AzureRmVM</em></strong> cmdlet if you want.</p>
<p>The script encapsulates all this logic into a reusable function and also provides a best-practices template for invoking it.
<strong>Note</strong>: Check the comments marked with &quot;<strong>ToDo</strong>&quot; for the places where you should make changes. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20VM%20Operations" target="_blank">Convert-VMToManagedDisk.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-vm-operations-convert-vm-to-managed-disk-vm</link>
<pubDate>Sun, 02 Dec 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - VM Operations - Setting up the VM Backup</title>
<description><![CDATA[<p>If you want to set up VM Backup for multiple VMs in your environment you can leverage this script. This script lets you provide a CSV file as an input to the script and configures the Backup on all the VMs as per the configurations.</p>
<h3>Script Requirements</h3>
<p>The script requires you to populate a configuration CSV file; a sample CSV file is provided along with the script. This configuration file should have the following columns:</p>
<ol>
<li>Computer - Name of the VM in Azure</li>
<li>OSType - Windows or Linux</li>
<li>ResourceGroupName - Resource Group of the VM</li>
<li>EnableBackup - yes or no</li>
<li>RecoveryServicesVaultName - Recovery Services Vault Name for the Backup</li>
<li>BackupProtectionPolicy - Backup Protection Policy name inside the Recovery Services Vault. E.g. Daily-30--Weekly-8--Monthly-6</li>
</ol>
<h3>Script Working</h3>
<p>The script first fetches the Azure Recovery Services vault using the below command.</p>
<pre><code>$recoveryServicesVault = Get-AzureRmRecoveryServicesVault -Name $RecoveryServicesVaultName</code></pre>
<p>It then uses this information to set the Recovery Services Vault Context as shown below.</p>
<pre><code>$recoveryServicesVault | Set-AzureRmRecoveryServicesVaultContext -ErrorAction Stop</code></pre>
<p>It then fetches the container for the VM Backup using the below command. Through this, it checks whether the Backup is already configured on the VM; if it is, no further action is taken. </p>
<pre><code>$namedContainerCheck = Get-AzureRmRecoveryServicesBackupContainer -ContainerType "AzureVM" -Status "Registered" -FriendlyName $virtualMachineName</code></pre>
<p>The Backup policy is fetched next:</p>
<pre><code>$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureVM" | where {$_.Name -eq $BackupProtectionPolicy}</code></pre>
<p>Finally, the Backup is configured on the VM using the below command.</p>
<pre><code>Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name $virtualMachineName -ResourceGroupName $ResourceGroupName -ErrorAction Stop</code></pre>
<p>Once the Backup is successfully configured, as a best practice, we need to trigger an initial Backup of the VM. This is performed by the below commands.</p>
<pre><code>Write-Host "Fetching the Recovery Services Backup Container"
$namedContainer = Get-AzureRmRecoveryServicesBackupContainer -ContainerType "AzureVM" -Status "Registered" -FriendlyName $virtualMachineName
Write-Host "Fetching the Recovery Services Backup Item"
$item = Get-AzureRmRecoveryServicesBackupItem -Container $namedContainer -WorkloadType "AzureVM"
Write-Host "Triggering a Backup on the VM"
$job = Backup-AzureRmRecoveryServicesBackupItem -Item $item</code></pre>
<p>The script encapsulates all this logic into a reusable function and also provides a best-practices template for invoking it.
<strong>Note</strong>: Check the comments marked with &quot;<strong>ToDo</strong>&quot; for the places where you should make changes. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20VM%20Operations" target="_blank">Enable-VMBackup.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-vm-operations-setting-up-the-vm-backup</link>
<pubDate>Fri, 23 Nov 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Checking if the Prompt for current script is Elevated or not</title>
<description><![CDATA[<p>Multiple times I have run into scenarios where end users were reporting issues with a script, but eventually it turned out to be an access issue: the script needed to be executed from an Elevated console (i.e. Run As Administrator) while the end user was trying to execute it as a normal user. Wouldn't it be nice if the script could check for itself whether it is being executed from an Elevated prompt and report that requirement to the user?</p>
<p>This script sample can be reused in multiple scenarios and in multiple scripts. It will ensure that the script is executed only from an elevated prompt, or else it will throw an error with relevant details so the end user can take corrective action. </p>
<h3>Script Working</h3>
<p>The script first fetches the current Windows security principal by using the below commands:</p>
<pre><code>$WindowsID = [System.Security.Principal.WindowsIdentity]::GetCurrent()
$WindowsPrincipal = New-Object System.Security.Principal.WindowsPrincipal($WindowsID)</code></pre>
<p>It then fetches the Administrator role details using the below command:</p>
<pre><code>$adminRole = [System.Security.Principal.WindowsBuiltInRole]::Administrator</code></pre>
<p>Now that it has both pieces of required information, the script checks whether the current Windows Principal is part of the Administrator role by using the below condition:</p>
<pre><code>if ($WindowsPrincipal.IsInRole($adminRole))
{
    return $True
}
else
{
    return $False
}</code></pre>
<p>The <strong>complete script sample</strong> turns this into a reusable logic and provides a template for using this with other code. </p>
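<p>For example, a calling script could guard itself at the very top like this (assuming the helper is named Get-IsElevated, as in the sample):</p>
<pre><code>if (-not (Get-IsElevated))
{
    throw "This script must be run from an elevated prompt (Run As Administrator)."
}</code></pre>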
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Get-IsElevated" target="_blank">Get-IsElevated.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-checking-if-the-prompt-for-current-script-is-elevated-or-not</link>
<pubDate>Mon, 19 Nov 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Creating Multiple Resource Groups with RBAC Role Assignment</title>
<description><![CDATA[<p>You can automate the creation of the <strong>Resource Groups</strong> and also assign <strong>RBAC roles</strong> to these Resource Groups. This script sample shows you how you can accomplish this. It serves as a starting point for the automation of the resource group creation. You can modify the script as per your requirements.</p>
<h3>How the Script works</h3>
<p>The script takes the names of the Resource groups to be created as an array. You can modify the script to take this input from a CSV file along with other parameters like the location of the Resource Group and the related role assignments. </p>
<p>The script uses the below command to create a Resource Group.</p>
<pre><code>New-AzureRmResourceGroup -Name $rg -Location eastus</code></pre>
<p>Note that the location is hard-coded in the above command. This can also be parameterized. </p>
<p>It then uses the below command to make the role assignments on this Resource Group.</p>
<pre><code>New-AzureRmRoleAssignment -ObjectId $ADGroup01.Id -RoleDefinitionName "Contributor" -ResourceGroupName $rg</code></pre>
<p>Note that you can use the same command to make the assignments for a group, user or an app registration. You just need to provide the appropriate Id for the ObjectId parameter. </p>
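<p>Combined, the loop over the Resource Group names might look like the below sketch; the array contents and the AD group search string are placeholders.</p>
<pre><code>$resourceGroupNames = @("Prod-Web-RG", "Prod-Data-RG")
$ADGroup01 = Get-AzureRmADGroup -SearchString "RG-Contributors"

foreach ($rg in $resourceGroupNames)
{
    # Create the Resource Group, then grant the group Contributor on it
    New-AzureRmResourceGroup -Name $rg -Location eastus
    New-AzureRmRoleAssignment -ObjectId $ADGroup01.Id -RoleDefinitionName "Contributor" -ResourceGroupName $rg
}</code></pre>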
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Create-ResourceGroupsWithRBACAssignments" target="_blank">Create-ResourceGroupsWithRBACAssignments.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-creating-multiple-resource-groups-with-rbac-role-assignment</link>
<pubDate>Thu, 15 Nov 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Set Tags on Azure Resources</title>
<description><![CDATA[<p>Earlier we fetched the Azure resources report with the tagging-related information. You can review the script that fetches that report here: <a href="https://HarvestingClouds.com/post/script-sample-generate-azure-resources-report-by-tags-v30/" target="_blank">Script Sample - Generate Azure Resources Report by Tags v3.0</a></p>
<p>The output CSV generated by that script can be updated: it can be corrected for missing tag values, incorrect tag values, etc. The updated CSV then becomes the input to the script sample discussed in this blog. This script reads from the updated CSV file and applies the tags back to Azure.</p>
<p><strong>Note</strong>: This script takes a lot of time because the underlying cmdlet (the &quot;<strong><em>Set-AzureRmResource</em></strong>&quot; cmdlet) takes a lot of time to set the tags. It is therefore advised that you filter down to only the resources whose tags you want to update.</p>
<h3>Script Working</h3>
<p>As mentioned earlier, this script takes the previously generated CSV file as input. The schema for this input file can be checked against the output of the Get script from the link mentioned above. </p>
<p>The script then connects to Azure and updates the tags. If the tags were not already present, it adds the tagging information. It uses the below command to update the tags.</p>
<pre><code>Set-AzureRmResource -Tag $r.Tags -ResourceId $r.ResourceId -Force</code></pre>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Tagging%20Reports" target="_blank">Set-AzureRmTags.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-set-tags-on-azure-resources</link>
<pubDate>Wed, 07 Nov 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Generate Azure Resources Report by Tags v3.0</title>
<description><![CDATA[<p>This is version 3.0 of an older script which generates the report of Azure resources by tags. It is one of the most important scripts for keeping your environment compliant: you can pull reports from the CSV generated by this script to see which resources are missing tags or do not have the correct tags applied.</p>
<p><strong>What is updated in this version</strong>: This version can now parse the Tags. It even accounts for the spacing between the tag names. You will need to tweak the script to parse your own tags; a sample is provided in the script. You will need to modify the lines between 87 and 145 in the linked script. </p>
<p>You can still refer to the older script, along with the details mentioned in that post, here: <a href="https://HarvestingClouds.com/post/script-sample-generate-azure-resources-report-by-tags/" target="_blank">Script Sample - Generate Azure Resources Report by Tags</a></p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Tagging%20Reports" target="_blank">Get-AzureRmTagsReport - v3.0.ps1</a></p>
<h3>Next Steps - Setting the updated Tags</h3>
<p>Now, what do you do with the output CSV report that you got from running this script? You update this CSV file by making corrections; if any resources are missing tags, you add them. After that, this CSV file becomes an input to another script, which sets the Tag changes back on the Azure resources. This script can be found here: <a href="https://HarvestingClouds.com/post/script-sample-set-tags-on-azure-resources/" target="_blank">Script Sample - Set Tags on Azure Resources</a>.</p>
<link>https://HarvestingClouds.com/post/script-sample-generate-azure-resources-report-by-tags-v30</link>
<pubDate>Sat, 03 Nov 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Getting Azure Resource Reports</title>
<description><![CDATA[<p>This post covers two script samples in one. You can fetch reports on Azure resources with the script samples in this post. The two scripts are:</p>
<ol>
<li><strong>Get-AzureRmAllResourceReport</strong> - which fetches the report for all Azure resources in all subscriptions</li>
<li><strong>Get-AzureRmResourceReportForOneResourceGroup</strong> - which fetches the report for Azure resources in a particular Resource Group</li>
</ol>
<h3>Script Inputs</h3>
<p>All you need to do is provide the path for the output CSV file. For the second script, you also need to provide the name of the Resource Group.</p>
<p>The script will iterate over all Azure resources and produce an output CSV file. For Azure VM and SQL resources, the script will also provide additional information. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Get-AzureRmResourceReport" target="_blank">Scripts to Fetch Azure Resource Reports</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-getting-azure-resource-reports</link>
<pubDate>Thu, 25 Oct 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Apply RBAC Role to Users on Resources</title>
<description><![CDATA[<p>If you have to assign a role to users on multiple resources, this script can reduce your workload and do the heavy lifting for you. Simply provide the inputs and you can grant users access at any scope, i.e. Subscription, Resource Group or individual resource. </p>
<p>Currently, the script factors in Virtual Machines as individual resources, but these can be replaced by any resource type. </p>
<h3>Inputs and Script Requirements</h3>
<p>The script takes in the following inputs:</p>
<ol>
<li>csvLocation - this is the CSV containing the names of the resources on which you want to grant role-based access. An example CSV is also provided with the script. It is a simple two-column CSV: the first column is the VM name (or resource name, if you want to generalize) and the second column is the Resource Group containing that resource.</li>
<li>role - This is the role that you want to assign. E.g. &quot;Virtual Machine Operator&quot;</li>
<li>Scope - this can be one of three values: &quot;VirtualMachine&quot;, &quot;Subscription&quot; or &quot;ResourceGroup&quot;</li>
<li>Usernames - this can be an array of the user names who will be assigned the access on the resources</li>
<li>Groupnames - if groups need to be assigned access then they can be mentioned here</li>
<li>RBACOperatorFlag - a boolean value with a default of true. Set this to false when you are making changes and do not want the script to perform any actions. You do not need to worry about this parameter for most scenarios and can ignore it. </li>
</ol>
<h3>Example Execution</h3>
<p>An example execution of the script looks like the below. </p>
<pre><code>Apply-RBACRoleToResources -csvLocation "C:\Users\aman\Documents\AzureVM.csv" -role "Virtual Machine Operator" -Scope "VirtualMachine" -UserNames "user1,user2" -GroupNames "abac" </code></pre>
<p>The above command will read the VM names and their related Resource Groups from the AzureVM.csv file. It will assign the &quot;Virtual Machine Operator&quot; role at the Virtual Machine level, to the users user1 and user2 along with the abac group. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Apply-RBACRoleToResources" target="_blank">Apply-RBACRoleToResources.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-apply-rbac-role-to-users-on-resources</link>
<pubDate>Mon, 22 Oct 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Check for Pending Reboots on various VMs in your environment</title>
<description><![CDATA[<p>Often you will need to check whether any VMs/servers in your environment have a pending reboot. You will want to schedule and perform a planned reboot on these VMs after going through the proper processes.</p>
<p>The script sample in this post checks multiple VMs in your environment for any pending reboot. The script uses WMI queries to determine whether a reboot is pending on each machine. </p>
<p>This script was not authored by me; I received it from one of my colleagues and couldn't track down the original author. Nevertheless, it is a very useful script for system administrators. </p>
<h3>Script Usage</h3>
<p>The script can be used as shown below:</p>
<pre><code>C:\Users\aman\Downloads&gt; import-module .\Get-PendingReboot.ps1
C:\Users\aman\Downloads&gt; $Servers = Get-Content C:\Users\aman\Downloads\Servers.txt
C:\Users\aman\Downloads&gt; Get-PendingReboot -Computer $Servers | Export-Csv C:\Users\aman\Downloads\PendingRebootReport.csv -NoTypeInformation</code></pre>
<p>In the second command above, the Servers.txt file should simply list the names of the computers in your environment, one computer name per line. </p>
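<p>Under the hood, pending-reboot checks of this kind typically inspect a few well-known registry locations. The below is a simplified local-machine sketch of the idea; the actual script performs these checks remotely via WMI.</p>
<pre><code># Component Based Servicing flags a servicing reboot
$cbs  = Test-Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending"

# Windows Update flags a post-update reboot
$wua  = Test-Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired"

# Pending file rename operations also imply a reboot
$pfro = (Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager" -ErrorAction SilentlyContinue).PendingFileRenameOperations

$rebootPending = $cbs -or $wua -or [bool]$pfro</code></pre>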
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Get-PendingReboot" target="_blank">Get-PendingReboot.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-check-for-pending-reboots-on-various-vms-in-your-environment</link>
<pubDate>Mon, 15 Oct 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Apply Locks on Various Azure Resources</title>
<description><![CDATA[<p>Locks are a very important but lesser-known feature in Azure. This feature is available for all resources in Azure and prevents unintended operations on a particular resource. </p>
<p>There are two types of locks in Azure:</p>
<ol>
<li><strong>ReadOnly</strong> - You won't be able to alter any configuration of the resource</li>
<li><strong>DoNotDelete</strong> - You will be able to add configurations but will not be able to remove configurations or even delete the resource</li>
</ol>
<p><strong>Do Not Delete</strong> is the lock that, as a best practice, you should apply to all critical resources in the environment. Once this lock is on a resource, even the Global Administrator will not be able to delete it. The only way to delete the resource is to delete the lock first and then delete the resource. </p>
<h3>The Script Sample Details</h3>
<p>This script sample leverages this concept of locks and uses the below cmdlet to apply the locks on various critical resources in the environment. </p>
<pre><code>New-AzureRmResourceLock -LockLevel CanNotDelete -LockName DoNotDelete -ResourceName $vNet.Name -ResourceType $vNet.Type -ResourceGroupName $vNet.ResourceGroupName -LockNotes "Do Not Delete Lock" -Confirm -Force</code></pre>
<p>The above command uses <strong><em>New-AzureRmResourceLock</em></strong> cmdlet to create the Do Not Delete lock on a Virtual Network. </p>
<p>This script iterates through all subscriptions that your account has access to and applies the lock to all resources of the following types:</p>
<ol>
<li>Virtual Network</li>
<li>Route Tables</li>
<li>Express Routes</li>
<li>Virtual Network Gateways</li>
<li>Virtual Network Gateway Connections</li>
<li>Recovery Services Vaults (i.e. ASR Vaults)</li>
</ol>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Apply-LocksOnVariousAzureResources" target="_blank">Apply-LocksOnVariousAzureResources.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-apply-locks-on-various-azure-resources</link>
<pubDate>Thu, 11 Oct 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Azure Automation - Check VM Availability</title>
<description><![CDATA[<p>One of the most common scenarios is to check whether a VM is available. If you want to connect to a VM and take some actions, it is a best practice to first check that the VM is available and responsive. This script sample performs multiple checks; you can tweak them to include only the ones that you need. </p>
<p>Also, there are some pieces of information that you can get only from inside the VM, e.g. the domain to which the VM is joined. This script first ensures that the VM is available; if it is, it can also fetch such pieces of information for you. </p>
<h3>What does the Script check</h3>
<p>The script makes multiple checks which include:</p>
<ol>
<li>Ping check using the <strong><em>Test-Connection</em></strong> cmdlet</li>
<li>PS Remoting check by trying to open a PS session using the <strong><em>New-PSSession</em></strong> cmdlet</li>
<li>WMI check by trying to connect to WMI and list various objects using the <strong><em>Get-WmiObject</em></strong> cmdlet</li>
<li>WS-Man check, similar to the PS Remoting check, using the <strong><em>Test-WSMan</em></strong> cmdlet</li>
</ol>
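<p>The four checks listed above can be sketched as below. Variable and parameter names here are illustrative, not necessarily the ones used in the actual script.</p>
<pre><code># Perform the four availability checks against the target VM
$PingOk  = Test-Connection -ComputerName $IPOfVM -Count 2 -Quiet
$Session = New-PSSession -ComputerName $IPOfVM -Credential $Cred -ErrorAction SilentlyContinue
$WmiOk   = [bool](Get-WmiObject -Class Win32_OperatingSystem -ComputerName $IPOfVM -Credential $Cred -ErrorAction SilentlyContinue)
$WSManOk = [bool](Test-WSMan -ComputerName $IPOfVM -ErrorAction SilentlyContinue)
if ($Session) { Remove-PSSession $Session }   # clean up the remoting session</code></pre>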
<p>It can then fetch VM-related information either by remoting into it or by using WMI. E.g. it can fetch the domain of the VM by using the WMI call below.</p>
<pre><code>$Domain = $wmi.GetStringValue($HKLM, $MachineKey, "NV Domain").sValue</code></pre>
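<p>For context, the $wmi, $HKLM and $MachineKey variables in a call like this are typically set up using the WMI StdRegProv registry provider, as sketched below. The exact setup in the sample script may differ; variable names are illustrative.</p>
<pre><code># Remote registry access via the WMI StdRegProv class
$HKLM = 2147483650   # constant for the HKEY_LOCAL_MACHINE hive
$MachineKey = "SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
$wmi = Get-WmiObject -List "StdRegProv" -Namespace root\default -ComputerName $IPOfVM -Credential $Cred
$Domain = $wmi.GetStringValue($HKLM, $MachineKey, "NV Domain").sValue</code></pre>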
<h3>Script Inputs and Requirements</h3>
<p>The script takes only the two input parameters below:</p>
<ol>
<li>NameOfTheVM</li>
<li>IPOfVM</li>
</ol>
<p>The script requires a credential object with the name &quot;<strong><em>AutomationCredentialName</em></strong>&quot;. This credential should have login and remoting access into the VM. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20Automation%20-%20Check%20VM%20Availability" target="_blank">Check-VMAvailability.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-azure-automation-check-vm-availability</link>
<pubDate>Fri, 05 Oct 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Azure Automation - Get VM Information from actual Azure Virtual Machine</title>
<description><![CDATA[<p>Many times, when automating your infrastructure, you will need to work with Azure Virtual Machines. Based on just the name of the VM and its Resource Group, you will need to know about its Network Interfaces and its IP address. </p>
<p>This Runbook sample connects to Azure using Service Principal and fetches various VM properties. The Runbook can be tweaked to fetch more or less information as per the requirements. </p>
<h3>Script Input and Requirements</h3>
<p>The script takes the below inputs:</p>
<ol>
<li>VM Name - name of the VM </li>
<li>ResourceGroupNameofVM - name of the Resource Group in which the VM exists</li>
<li>Azure Tenant Id - Id of the Tenant. This is the Azure Active Directory's Id which can be found in Azure AD properties in the portal</li>
<li>Subscription Id - Id of the Subscription in which the VM exists</li>
</ol>
<p>The script also requires a credential object, i.e. a Service Principal, which has access to the virtual machine. </p>
<h3>Working of the Script</h3>
<p>The script uses the below command to log into the Azure using the Service Principal.</p>
<pre><code>$AzureRMConn = Login-AzureRmAccount -ServicePrincipal -Credential $Cred -TenantId $AzureTenantId -ErrorAction SilentlyContinue -ErrorVariable LoginError</code></pre>
<p>If the connection is successful then the Subscription is selected using below cmdlet.</p>
<pre><code>$ConnSubs = Select-AzureRmSubscription -SubscriptionId $SubscriptionID</code></pre>
<p>Then the actual VM information is fetched using below cmdlet.</p>
<pre><code>$VM = Get-AzureRmVM -ResourceGroupName $VMResourceGroup -Name $VMName</code></pre>
<p>This fetches basic VM information. To fetch detailed information regarding the network interface of the VM, below cmdlets are used.</p>
<pre><code>$nicId = $VM.NetworkProfile.NetworkInterfaces[0].Id
$VMNetworkInterface = Get-AzureRmNetworkInterface -ResourceGroupName $VMResourceGroup -Name $nicId.Split('/')[8]
$VMMainIPConfig = $VMNetworkInterface | Get-AzureRmNetworkInterfaceIpConfig
$strVMMainNicName = $VMMainIPConfig.Name
$strVMMainNicIPAddress = $VMMainIPConfig.PrivateIpAddress</code></pre>
<p>The first line above fetches the Id of the first network interface on the VM. The second cmdlet uses this Id to fetch the actual network interface. The third cmdlet uses Get-AzureRmNetworkInterfaceIpConfig to fetch the IP configuration on this network interface. The last two lines then use this IP configuration to fetch the NIC name and the private IP address of the VM. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20Automation%20-%20Get%20VM%20Information%20From%20Azure" target="_blank">Get-VMInfoFromAzure.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-azure-automation-get-vm-information-from-actual-azure-virtual-machine</link>
<pubDate>Mon, 24 Sep 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Azure Automation - Sending Email Notification</title>
<description><![CDATA[<p>Sending email notifications from Azure Automation is a requirement that you will encounter a lot when working with any automation scenario. You may want to send an email notification on the completion of the job or on the failure of the job. </p>
<p>This script sample helps you bootstrap such a Runbook. </p>
<h3>Script Inputs and Requirements</h3>
<p>The script takes the following <strong>input parameters</strong>:</p>
<ol>
<li>SMTP Server URL</li>
<li>SMTP Server Port</li>
<li>User Credentials - Boolean value to check if user credentials are required or not</li>
<li>Credential Name - Credential name for the credential asset in your Azure Automation Account</li>
<li>EnableSsl - Boolean value to select if you want to enable SSL or not</li>
<li>Email From - From address for the email</li>
<li>Email To - Recipient of the email</li>
<li>Email Subject - Subject of the email</li>
<li>Message Body - This can be HTML, so you can compose your email to make it look more professional</li>
</ol>
<p>The script also requires you to have a Credential asset in your Azure Automation Account. Its name is specified in the fourth parameter above. This credential is used to connect to the SMTP server; in the sample, the SMTP server is an O365 email account. </p>
<h3>Script Working</h3>
<p>The script uses a SMTP Client to send the email. This client is created using the below cmdlet.</p>
<pre><code>$SmtpClient = New-Object System.Net.Mail.SmtpClient $SMTPServerUrl, $SMTPServerPort</code></pre>
<p>where $SMTPServerUrl is the URL of the SMTP server and $SMTPServerPort is the port number used by the SMTP server.</p>
<p>Later, this client is used to send the email by using its <strong><em>Send</em></strong> method as below. </p>
<pre><code>$SmtpClient.Send($Message)</code></pre>
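<p>For context, the $Message object here is a standard .NET MailMessage. A minimal sketch of its construction is below; variable names mirror the input parameters above but are illustrative, not necessarily the exact ones used in the script.</p>
<pre><code># Build the mail message and attach credentials before sending
$Message = New-Object System.Net.Mail.MailMessage $EmailFrom, $EmailTo, $EmailSubject, $MessageBody
$Message.IsBodyHtml = $true                     # allow an HTML body
$SmtpClient.EnableSsl = $EnableSsl
$Cred = Get-AutomationPSCredential -Name $CredentialName
$SmtpClient.Credentials = New-Object System.Net.NetworkCredential($Cred.UserName, $Cred.GetNetworkCredential().Password)
$SmtpClient.Send($Message)</code></pre>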
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20Automation%20-%20Send%20Email%20Notification" target="_blank">Send-EmailNotificationFromAzureAutomationRunbook.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-azure-automation-sending-email-notification</link>
<pubDate>Sat, 22 Sep 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Azure Automation - Get VM Information from ASR Recovery Plan Context</title>
<description><![CDATA[<p>This script sample is to parse the Recovery Plan Context from Azure Site Recovery. When an Azure Automation Runbook is invoked directly from Azure Site Recovery, it passes an object called <strong>Recovery Plan Context</strong>.  This has all the crucial information regarding the failover operation like:</p>
<ol>
<li>Recovery Plan Name</li>
<li>Failover Type</li>
<li>Failover Direction</li>
<li>Subscription Name</li>
<li>Resource Group Name</li>
<li>VM Name</li>
</ol>
<h3>Sample Recovery Plan Context</h3>
<p>A sample Recovery Plan Context is shown below. Note that it is formatted for readability here; in the Runbook it will be flattened, i.e. instead of spanning multiple lines, the context will be contiguous on a single long line. Its data type is Object when coming from the ASR Recovery Plan.</p>
<pre><code>{
   "RecoveryPlanName":"RecoveryPlan-Test",
   "FailoverType":"Test",
   "FailoverDirection":"PrimaryToSecondary",
   "GroupId":"Group1",
   "VmMap":{
      "aaaaaa-1111-1111-1111-aaaaaaaaaaa":{
         "SubscriptionId":"bbbbbbbb-2222-2222-2222-bbbbbbbbbbb",
         "ResourceGroupName":"rg-dr-test-asr",
         "CloudServiceName":null,
         "RoleName":"VMName-test",
         "RecoveryPointId":"cccccccccc-3333-3333-3333-cccccccccccc",
         "RecoveryPointTime":"\/Date(1550773609103)\/"
      }
   }
}</code></pre>
<h3>How does this Script work</h3>
<p>The script first uses the &quot;<strong><em>ConvertFrom-Json</em></strong>&quot; cmdlet to parse the context into a valid PowerShell object. It then uses the VmMap property of the Recovery Plan Context object to get various pieces of information. </p>
<p>The script also uses a <strong><em>foreach</em></strong> loop to account for multiple VMs in the Recovery Plan. It will <strong>output</strong> an array of the VMs with the required information, which can then be consumed in your logic to perform actual functions on the VMs, e.g. connecting to a VM and installing some extensions on it. </p>
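<p>The parsing pattern can be sketched as below. This is a simplified outline with illustrative variable names, not the exact script.</p>
<pre><code># Parse the context and collect one entry per VM in the VmMap
$RPContext = $RecoveryPlanContext | ConvertFrom-Json
$VMInfo = @()
foreach ($VMId in $RPContext.VmMap.PSObject.Properties.Name) {
    $VM = $RPContext.VmMap.$VMId
    $VMInfo += [PSCustomObject]@{
        VMName            = $VM.RoleName
        ResourceGroupName = $VM.ResourceGroupName
        SubscriptionId    = $VM.SubscriptionId
    }
}
$VMInfo   # output the array for downstream Runbook logic</code></pre>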
<h3>How to Invoke This Runbook/Script</h3>
<p>This script can be directly invoked from Azure Site Recovery (ASR)'s Recovery Plan or can be invoked from a master Runbook. It is a best practice to invoke this Runbook from a master Runbook. A sample for this Runbook is provided and discussed here: <a href="https://HarvestingClouds.com/post/script-sample-azure-automation-runbook-for-asr-recovery-plan/" target="_blank">Script Sample - Azure Automation - Runbook for ASR Recovery Plan</a></p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20Automation%20-%20ASR%20Recovery%20Plan" target="_blank">ASR-Get-VMSInfoFromASRContext.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-azure-automation-get-vm-information-from-asr-recovery-plan-context</link>
<pubDate>Wed, 12 Sep 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Azure Automation - Runbook for ASR Recovery Plan</title>
<description><![CDATA[<p>When you are automating Azure Site Recovery, you will be working with <strong>Azure Automation Runbooks</strong>. You will invoke these from within the <strong>Azure Site Recovery Plan</strong>.  The input parameter to this Runbook will be a <strong>Recovery Plan Context</strong>, that will automatically be passed by the ASR Recovery Plan. This Recovery plan context will contain information like: </p>
<ol>
<li>Recovery Plan Name</li>
<li>Failover Type</li>
<li>Failover Direction</li>
<li>Subscription Name</li>
<li>Resource Group Name</li>
<li>VM Name</li>
</ol>
<p>The Azure Automation Runbook should be able to receive this information and parse it. </p>
<h3>Best Practice and this Script Sample</h3>
<p>As a best practice, you should use the main Runbook only to perform operational functions. For example, if the automation that you want to trigger on failover requires a connection to the company's internal network, you will want it to run on a Hybrid Worker (which is connected to your internal network). As such, I recommend that the parsing of the recovery plan context be done in another Runbook, which is invoked by the main Runbook. </p>
<p>This sample PowerShell Runbook accepts the Recovery Plan context and passes it to the second Runbook. That Runbook may or may not run on a Hybrid worker. </p>
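<p>A minimal sketch of this hand-off could look like below. Authentication against Azure is omitted, and the Automation account, resource group and Hybrid Worker group names are illustrative placeholders.</p>
<pre><code>param (
    [Object]$RecoveryPlanContext   # passed automatically by the ASR Recovery Plan
)
# Invoke the parsing Runbook, optionally on a Hybrid Worker group
Start-AzureRmAutomationRunbook -AutomationAccountName "MyAutomationAccount" `
    -Name "ASR-Get-VMSInfoFromASRContext" `
    -ResourceGroupName "MyResourceGroup" `
    -Parameters @{ "RecoveryPlanContext" = $RecoveryPlanContext } `
    -RunOn "MyHybridWorkerGroup"</code></pre>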
<h3>Sample Recovery Plan Context</h3>
<p>A sample Recovery Plan Context is shown below. Note that it is formatted for readability here; in the Runbook it will be flattened, i.e. instead of spanning multiple lines, the context will be contiguous on a single long line. Its data type is Object when coming from the ASR Recovery Plan. </p>
<pre><code>{
   "RecoveryPlanName":"RecoveryPlan-Test",
   "FailoverType":"Test",
   "FailoverDirection":"PrimaryToSecondary",
   "GroupId":"Group1",
   "VmMap":{
      "aaaaaa-1111-1111-1111-aaaaaaaaaaa":{
         "SubscriptionId":"bbbbbbbb-2222-2222-2222-bbbbbbbbbbb",
         "ResourceGroupName":"rg-dr-test-asr",
         "CloudServiceName":null,
         "RoleName":"VMName-test",
         "RecoveryPointId":"cccccccccc-3333-3333-3333-cccccccccccc",
         "RecoveryPointTime":"\/Date(1550773609103)\/"
      }
   }
}</code></pre>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Azure%20Automation%20-%20ASR%20Recovery%20Plan" target="_blank">ASR-RunbookForRecoveryPlans.ps1</a></p>
<h3>Runbook for Parsing the Recovery Plan Context</h3>
<p>The Runbook for parsing the Recovery Plan Context is discussed here: <a href="https://HarvestingClouds.com/post/script-sample-azure-automation-get-vm-information-from-asr-recovery-plan-context/" target="_blank">Script Sample - Get VM's Information from ASR Context</a>. The Runbook/script in that link is the one that is being invoked by the Runbook discussed in this blog post. </p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-azure-automation-runbook-for-asr-recovery-plan</link>
<pubDate>Wed, 05 Sep 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Updating a Custom RBAC Role in Azure</title>
<description><![CDATA[<p>In an earlier blog post, we saw how to add a custom role in Azure to manage Role Based Access Control (RBAC) at a more granular level. You can review that post here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-custom-rbac-roles/" target="_blank">Demystifying Azure Security - Custom RBAC Roles</a>.</p>
<p>In this post, we will see how to make quick changes to this custom role leveraging Azure PowerShell script.</p>
<h3>Making the updates</h3>
<p>Just like any other Azure PowerShell script, you will connect to Azure and select the right subscription by using the following cmdlets.</p>
<pre><code>Add-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName $SubscriptionName</code></pre>
<p>Then you will fetch the current custom role by using the below cmdlet.</p>
<pre><code>$role = Get-AzureRmRoleDefinition $roleName</code></pre>
<p>Optionally, if you want to <strong>inspect the current role</strong>, then you can use below cmdlet to generate JSON file for the current custom role.</p>
<pre><code>Get-AzureRMRoleDefinition -Name $roleName | ConvertTo-Json</code></pre>
<p>Then you can make updates to the &quot;$role&quot; object. One such sample is outlined below; it adds the action that allows stopping (deallocating) VMs and then updates the description of the custom role.</p>
<pre><code>$role.Actions.Add("Microsoft.Compute/virtualMachines/deallocate/action")
$role.Description = "Can monitor, Start, Stop and restart virtual machines."</code></pre>
<p>Finally, the role definition is updated in Azure by using the cmdlet below.</p>
<pre><code>Set-AzureRmRoleDefinition -Role $role</code></pre>
<h3>Location of the Complete Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Update-CustomRoleSample" target="_blank">Update-CustomRoleSample.ps1</a></p>]]></description>
<link>https://HarvestingClouds.com/post/updating-a-custom-rbac-role-in-azure</link>
<pubDate>Wed, 29 Aug 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Set Azure Site Recovery (ASR) Prerequisite Services</title>
<description><![CDATA[<p>When you want to use Azure Site Recovery (ASR), either for migration or for Disaster Recovery (DR), you will need to enable replication. Under the hood, this involves installation of the Mobility Services agent, which has a few prerequisites: some services must be enabled and set up appropriately. </p>
<p>In the last script sample, we saw how to check whether these services are set correctly on remote computers. That sample can be found here: <a href="https://HarvestingClouds.com/post/script-sample-check-azure-site-recovery-asr-prerequisite-services/" target="_blank">Script Sample - Check Azure Site Recovery (ASR) Pre-requisites</a>.</p>
<p>In this sample, we will see how to set these services correctly on the remote computers.</p>
<h3>How it works</h3>
<p>This script will query the remote computers and set the status of the 3 ASR-required services. These 3 services are:</p>
<ol>
<li>Volume Shadow Copy(VSS) </li>
<li>COM+ System Application(COMSysApp)</li>
<li>Microsoft Distributed Transaction Coordinator Service (MSDTC)</li>
</ol>
<p>The script will set the startup type of the services to &quot;<strong><em>automatic</em></strong>&quot; and will then &quot;<strong><em>start</em></strong>&quot; the service. It uses the below cmdlets to perform these actions.</p>
<pre><code>Set-Service -name 'msdtc' -ComputerName $Computer -StartupType Automatic
Start-Sleep -s 5
Get-Service -Name 'msdtc' -ComputerName $computer| Start-Service</code></pre>
<p>The cmdlets to configure the other two services are similar. </p>
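<p>All three services can also be handled in a single loop, as sketched below; the $Computers collection is an illustrative placeholder for the list of remote machines.</p>
<pre><code># Set startup type to Automatic and start each ASR-required service
$Services = @('VSS', 'COMSysApp', 'MSDTC')
foreach ($Computer in $Computers) {
    foreach ($Service in $Services) {
        Set-Service -Name $Service -ComputerName $Computer -StartupType Automatic
        Start-Sleep -s 5
        Get-Service -Name $Service -ComputerName $Computer | Start-Service
    }
}</code></pre>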
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/ASR%20Pre-Requisites" target="_blank">Set-ASRPrerequisiteServivces.ps1 on GitHub</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-set-azure-site-recovery-asr-prerequisite-services</link>
<pubDate>Thu, 23 Aug 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Check Azure Site Recovery (ASR) Prerequisite Services</title>
<description><![CDATA[<p>When you want to use Azure Site Recovery (ASR), either for migration or for Disaster Recovery (DR), you will need to enable replication. Under the hood, this involves installation of the Mobility Services agent, which has a few prerequisites: some services must be enabled and set up appropriately. </p>
<p>If you have a large environment then you can do this in an automated fashion by leveraging this script sample. You can learn more about these services related requirements by checking the &quot;<strong><em>Checking Services</em></strong>&quot; section in here: <a href="https://HarvestingClouds.com/post/troubleshooting-azure-site-recovery-asr-data-replication-initiation-issues-part-2/" target="_blank">Troubleshooting Azure Site Recovery (ASR) - Data Replication Initiation Issues - Part 2</a></p>
<h3>How it works</h3>
<p>This script will query the remote computers, check the status of the 3 ASR-required services, and export the results to CSV. These 3 services are:</p>
<ol>
<li>Volume Shadow Copy(VSS) </li>
<li>COM+ System Application(COMSysApp)</li>
<li>Microsoft Distributed Transaction Coordinator Service (MSDTC)</li>
</ol>
<p>The script uses WMI to fetch the data from the remote computer(s) and checks whether these services are in the &quot;<strong><em>running</em></strong>&quot; state and whether the startup type is set to &quot;<strong><em>automatic</em></strong>&quot;. If either condition is not met, the state is reported as &quot;<em>Set Incorrectly</em>&quot;; if both conditions are met, the result is reported as &quot;<em>Set Correctly</em>&quot;.</p>
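<p>The WMI check can be sketched as below. The $Computers collection and output file name are illustrative placeholders, not the exact ones in the script.</p>
<pre><code># Check state and startup type of the three ASR-required services via WMI
$Results = foreach ($Computer in $Computers) {
    foreach ($Name in @('VSS', 'COMSysApp', 'MSDTC')) {
        $Svc = Get-WmiObject -Class Win32_Service -ComputerName $Computer -Filter "Name='$Name'"
        [PSCustomObject]@{
            Computer = $Computer
            Service  = $Name
            Status   = if ($Svc.State -eq 'Running' -and $Svc.StartMode -eq 'Auto') { 'Set Correctly' } else { 'Set Incorrectly' }
        }
    }
}
$Results | Export-Csv -Path .\ASRServiceStatus.csv -NoTypeInformation</code></pre>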
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/ASR%20Pre-Requisites" target="_blank">Check-ASRPrerequisiteServices.ps1 on GitHub</a></p>
<h3>Next Script  - Setting these services automatically</h3>
<p>Next, we will check the script sample to set these services appropriately. This sample can be found here: <a href="https://HarvestingClouds.com/post/script-sample-set-azure-site-recovery-asr-prerequisite-services/" target="_blank">Script Sample - Set Azure Site Recovery (ASR) Pre-requisite Services</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-check-azure-site-recovery-asr-prerequisite-services</link>
<pubDate>Mon, 20 Aug 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Format EA Billing Usage Csv for Tags</title>
<description><![CDATA[<p>As a system administrator, you will need to work with Enterprise Agreement (EA) billing data a lot. You will need to pull many reports based on the consumption and usage CSV file that you get from the EA portal, i.e. at <a href="https://ea.azure.com" target="_blank">https://ea.azure.com</a>. Tagging is one of the important components, but tags come as a composite JSON string in a single column. You will want to split this into multiple columns, with one column for each tag. </p>
<p>The script sample in this blog performs exactly this parsing for you. You will need to tweak the script to include the tags as per your environment. </p>
<blockquote>
<p><strong>Note</strong>: The EA portal is only available to EA customers. If you are not an EA customer and try to log in, you will get the following error: &quot;Invalid User: The account provided is not a valid user of the Microsoft Azure Enterprise Portal. Please sign in with a valid account. If you believe you have received this message in error, please contact Support.&quot;</p>
</blockquote>
<h3>Sample Input EA Azure Billing CSV</h3>
<p>The tags section in the CSV will look similar to below <strong>JSON format key-value pairs</strong> under the column for &quot;<strong>Tags</strong>&quot;:</p>
<pre><code>{  "ApplicationOwner": "aman@domain.com",  "BusinessUnit": "Infrastructure",  "ApplicationType": "Test",  "Department": "Infrastructure"}</code></pre>
<p>Now as a requirement, you will want to split this to separate columns for the Tags for &quot;Application Owner&quot;, &quot;Business Unit&quot;, &quot;ApplicationType&quot; and &quot;Department&quot;. </p>
<h3>How does the script parse this JSON</h3>
<p>The script parses the JSON by splitting it into multiple values and then saving the new object as an output CSV. Right now the tag names are hard-coded into the script and need to be tweaked as per your requirements. Look for the variables and output member names that match the Tags mentioned above; these are the ones you will need to alter. </p>
<p>The script also accounts for tags that were added incorrectly with a space in the name instead of Pascal casing (i.e. no space between words and the first letter of each word capitalized).</p>
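<p>The core of the parsing can be sketched as below. This is a simplified outline: the tag names are the sample ones from above and must be adjusted for your environment, the file names are illustrative, and the actual script also carries the original billing columns through to the output.</p>
<pre><code># Parse the composite Tags JSON into separate columns and export
$Rows = Import-Csv -Path .\EAUsage.csv
$Output = foreach ($Row in $Rows) {
    $Tags = $Row.Tags | ConvertFrom-Json
    [PSCustomObject]@{
        ApplicationOwner = $Tags.ApplicationOwner
        BusinessUnit     = $Tags.BusinessUnit
        ApplicationType  = $Tags.ApplicationType
        Department       = $Tags.Department
    }
}
$Output | Export-Csv -Path .\EAUsage-WithTagColumns.csv -NoTypeInformation</code></pre>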
<h3>Location of the Script</h3>
<p>The script is located in the GitHub along with samples for Input and Output CSV files at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Format-EABillingUsageCsvForTags" target="_blank">GitHub location for Format Billing CSV Script</a></p>
<blockquote>
<p><strong>Note</strong>: An alternative to using the billing CSV file is to leverage <strong>PowerBI</strong> and build dashboards by consuming the out-of-the-box EA billing APIs. These are in preview at the time of writing, and parsing tags is still an issue with them. We will cover these in a future blog post. </p>
</blockquote>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-format-ea-billing-usage-csv-for-tags</link>
<pubDate>Wed, 08 Aug 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Export All OMS Log Analytics Saved Searches</title>
<description><![CDATA[<p>If you work with OMS Log Analytics, you end up working with lots of queries. To manage the queries you use most in your environment, you save them as Saved Searches. These are reusable in other projects as well, so you will want to export them. </p>
<h3>How it works</h3>
<p>This is a very simple but useful script sample. It exports the saved searches using the single cmdlet below. If you are already logged into Azure and have the right subscription selected, you can use this cmdlet on its own instead of the whole script.</p>
<pre><code>(Get-AzureRmOperationalInsightsSavedSearch -ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName).Value.Properties | ConvertTo-Json</code></pre>
<p>Note that it uses &quot;<strong><em>ConvertTo-Json</em></strong>&quot; cmdlet to export the Saved Searches to JSON output. You can later reuse this JSON to import these saved searches into a different environment as well. </p>
<h3>Location of the Script</h3>
<p>You can find this script in GitHub at this location: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Export-OMSLogAnalyticsSavedSearches" target="_blank">Export Log Analytics Saved Searches</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-export-all-oms-log-analytics-saved-searches</link>
<pubDate>Wed, 25 Jul 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Export All Azure Automation Account Runbooks and Variables</title>
<description><![CDATA[<p>When you work with Azure Automation, you build lots of Runbooks over time. When the volume grows, you can't just export each and every Runbook one by one. Add to that the fact that you may have multiple Azure Automation accounts, and it becomes a nightmare to download all the Runbooks individually. </p>
<p>There are two ways to export all the Runbooks all at once. </p>
<h3>First method - Using Azure Automation PowerShell ISE Add-On</h3>
<p>You can install the Azure Automation PowerShell ISE Add-on and connect to your Azure Automation Account. Then you can download all the Runbooks locally and copy the same. This Add-On can be downloaded from GitHub or PowerShell gallery as per the instructions provided here: <a href="https://github.com/azureautomation/azure-automation-ise-addon" target="_blank">Azure Automation PowerShell ISE Add-On</a></p>
<h3>Second method - Using this script sample</h3>
<p>This script sample requires you to update the input variables for your environment, i.e. your subscription, Azure Automation account name, etc. Then it connects to Azure using the cmdlets below.</p>
<pre><code>Add-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName $SubscriptionName</code></pre>
<p>Then it exports the Azure Automation Runbooks using the cmdlets below: it first fetches all the Runbooks and then uses the &quot;<em>Export-AzureRmAutomationRunbook</em>&quot; cmdlet to export them to a local folder.</p>
<pre><code>$AllRunbooks = Get-AzureRmAutomationRunbook -AutomationAccountName $AutomationAccountName -ResourceGroupName $ResourceGroupName
$AllRunbooks | Export-AzureRmAutomationRunbook -OutputFolder $OutputFolder</code></pre>
<p>Finally, it exports the variables in the environment using the cmdlets below: it first fetches the variables and then uses the &quot;Export-Csv&quot; cmdlet to export them to a CSV file. </p>
<pre><code>$variables = Get-AzureRmAutomationVariable -AutomationAccountName $AutomationAccountName -ResourceGroupName $ResourceGroupName
$variablesFilePath = $OutputFolder + "\variables.csv"
$variables | Export-Csv -Path $variablesFilePath -NoTypeInformation</code></pre>
<h3>Location of the script</h3>
<p>This script is located in the Github here: <a href="https://github.com/HarvestingClouds/PowerShellSamples/tree/master/Scripts/Export-AllAutomationRunbooksAndVariables" target="_blank">Export Azure Automation Runbooks and Variables</a></p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-export-all-azure-automation-account-runbooks-and-variables</link>
<pubDate>Mon, 16 Jul 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Book published on Amazon - Quick and Practical Guide to ARM Templates</title>
<description><![CDATA[<p><strong>Quick and Practical Guide to ARM Templates</strong> is a very non-conventional book. The objective of the book is to help you &quot;Become Experts in Developing ARM Templates for Microsoft Azure without any prior knowledge&quot;. This book is for both beginner and intermediate users. </p>
<p><strong>Update [03-27-2018]</strong>: This book is available for <strong>FREE</strong> for a limited time only (on 27, 28 and 29th March, 2018 till midnight Pacific Time). <a href="https://www.amazon.com/dp/B07C8LSBSN" target="_blank"><strong>Download Now!!!</strong></a></p>
<p>This is a quick and practical approach to learning ARM Templates for Azure. It covers only the most essential topics that you will need 95% of the time while working with ARM Templates. The goal is to have you working productively on ARM Templates sooner, without compromising any of the best practices. </p>
<p>The book assumes that you do not have prior knowledge of ARM Templates. If you have no development background then this book is for you. It starts by building the fundamentals on which ARM Templates are built.  The more practical approach means less theory and more focus on the practical aspects that can help you start working and delivering on ARM Templates.</p>
<p>You can view the book on Amazon.com here: <a href="https://www.amazon.com/dp/B07C8LSBSN">https://www.amazon.com/dp/B07C8LSBSN</a></p>
<p>You can also search for the book in your local Amazon marketplace.</p>
<img src="/images/15217860175ab49ca149064.png" alt="Quick and Practical Guide to ARM Templates">]]></description>
<link>https://HarvestingClouds.com/post/book-published-on-amazon-quick-and-practical-guide-to-arm-templates</link>
<pubDate>Tue, 27 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Getting Started with Virtual Machine Serial Console access</title>
<description><![CDATA[<p>This is the feature we have all been waiting for; it gives power back to administrators in the cloud. <strong>Virtual Machine Console Access</strong> is now available in preview in Azure. The console is called the <strong>Special Administrative Console</strong> (SAC). With it, you can perform advanced troubleshooting such as correcting firewall rules by fixing iptables, recovering filesystems, locking down the network, interacting with and modifying the bootloader, etc. </p>
<p><strong>Note</strong>: Windows images on Azure do not have <strong>Special Administrative Console</strong> (SAC) enabled by default. SAC is supported on server versions of Windows but is not available on client versions (e.g. Windows 10, Windows 8 or Windows 7).</p>
<h3>How to Enable SAC on Windows VM</h3>
<p>On a Linux VM, serial access is available by default, but on a Windows VM you need to perform additional steps inside the VM to enable it. The steps are:</p>
<ul>
<li>Login to your VM via Remote Desktop</li>
<li>
<p>From an Administrative command prompt run the following commands </p>
<p><code>bcdedit /ems {current} on</code></p>
<p><code>bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200</code></p>
</li>
<li>Reboot the system for the SAC console to be enabled</li>
</ul>
<h3>How to Access</h3>
<p>Currently, this option (in Preview at the time of writing this blog) is available only via the portal. To access the Serial Console go to:</p>
<ol>
<li>The Virtual Machine (VM) for which you want to access the Serial Console</li>
<li>Ensure that the VM is up and running</li>
<li>Navigate to the  &quot;<strong>Serial console (Preview)</strong>&quot; option under &quot;Support + Troubleshooting&quot; section in VM settings</li>
</ol>
<p>If the VM is not up and running, you will see the error below.</p>
<pre><code>The serial console connection to the VM encountered an error: 'Bad Request'
 (400) - The VM is in a stopped deallocated state.  Please start the VM and retry the serial console connection.</code></pre>
<p>While the VM is starting you will see the message below, indicating that you are being connected to the Serial Console.</p>
<img src="/images/15222511335abbb57d69110.png" alt="Connecting to SAC">
<p>Once connected, you can run <code>ch -?</code> for information on using channels, or simply type <code>?</code> for general help.</p>
<img src="/images/15222511405abbb5841a716.png" alt="Connected SAC and Help options">
<p><strong>Note</strong>: The console will automatically disconnect after a period of inactivity.</p>
<p>Access is being rolled out to more and more subscriptions and regions. If you don't see it today, it should be coming soon.</p>]]></description>
<link>https://HarvestingClouds.com/post/getting-started-with-virtual-machine-serial-console-access</link>
<pubDate>Mon, 26 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Series Index</title>
<description><![CDATA[<p>Understanding security is of utmost importance when designing any application architecture. When bringing your applications or infrastructure to Azure, or designing new applications in Azure, you need to be aware of all the ways you can make your application or design more secure by leveraging the features Azure has to offer.</p>
<p>This series covers various aspects of <strong>security</strong> across different areas of Azure.</p>
<p>This blog is an <strong>Index</strong> of various blogs in the series &quot;Demystifying Azure Security&quot;:</p>
<ol>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-rbac-roles/" target="_blank">Understanding RBAC Roles</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-custom-rbac-roles/" target="_blank">Custom RBAC Roles</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-just-in-time-vm-access/" target="_blank">Just In Time VM access</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-policies-1-basics/" target="_blank">Azure Policies - Basics</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-policies-2-assigning-a-policy/" target="_blank">Azure Policies - Assigning a Policy</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-creating-a-custom-policy-part-1-viewing-definition-of-an-existing-policy/" target="_blank">Creating a Custom Policy - Part 1 - Viewing Definition of an existing Policy</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-creating-a-custom-policy-part-2-understanding-the-policy-structure/" target="_blank">Creating a Custom Policy - Part 2 - Understanding the Policy Structure</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-creating-a-custom-policy-part-3-defining-your-custom-policy/" target="_blank">Creating a Custom Policy - Part 3 - Defining your Custom Policy</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-policies-initiative-definitions/" target="_blank">Azure Policies - Initiative Definitions</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-transparent-data-encryption/" target="_blank">Azure SQL Database - Transparent Data Encryption</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-auditing-threat-detection/" target="_blank">Azure SQL Database - Auditing &amp; Threat Detection</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-set-server-firewall/" target="_blank">Azure SQL Database - Set Server Firewall</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-firewall-rule-for-virtual-networks/" target="_blank">Azure SQL Database - Firewall Rule for Virtual Networks</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-and-azure-storage-service-endpoints-on-virtual-network/" target="_blank">Azure SQL Database and Azure Storage - Service Endpoints on Virtual Network</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-dynamic-data-masking/" target="_blank">Azure SQL Database - Dynamic Data Masking</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-series-index</link>
<pubDate>Tue, 20 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Site Recovery (ASR) - Series Index</title>
<description><![CDATA[<p>Outages can happen at any time, and there can be many reasons you encounter disruptions in service. It is always good to be prepared. Azure Site Recovery (ASR) ensures business continuity by providing you with a backup plan in case of an outage or a disaster-level event. </p>
<p>This series talks about various aspects of working with <strong>Azure Site Recovery</strong> or <strong>ASR</strong>. </p>
<p>This blog is an <strong>Index</strong> of various blogs in the series:</p>
<ul>
<li><a href="https://HarvestingClouds.com/post/troubleshooting-azure-site-recovery-asr-data-replication-not-working/" target="_blank">Troubleshooting Azure Site Recovery (ASR) - Data Replication Not Working</a></li>
<li><a href="https://HarvestingClouds.com/post/troubleshooting-azure-site-recovery-asr-data-replication-initiation-issues-part-2/" target="_blank">Troubleshooting Azure Site Recovery (ASR) - Data Replication Initiation Issues</a></li>
<li><a href="https://HarvestingClouds.com/post/azure-site-recovery-asr-new-feature-added-to-target-resource-groups/" target="_blank">Azure Site Recovery (ASR) - New feature added to target Resource Groups</a></li>
<li><a href="https://HarvestingClouds.com/post/suspending-and-resuming-azure-site-recovery-asr-replication-on-a-single-or-multiple-servers/" target="_blank">Suspending and Resuming Azure Site Recovery (ASR) Replication on a single or multiple servers</a></li>
<li><a href="https://HarvestingClouds.com/post/asr-setup-for-vms-running-in-vmware-without-vmware-level-user-access/" target="_blank">ASR Setup for VMs running in VMware without VMware level User Access</a></li>
<li><a href="https://HarvestingClouds.com/post/getting-started-azure-site-recovery-asr-in-new-azure-portal/" target="_blank">Getting Started - Azure Site Recovery (ASR) In New Azure Portal</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-site-recovery-asr-series-index</link>
<pubDate>Mon, 19 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure SQL Database - Dynamic Data Masking</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p><strong>Dynamic Data Masking</strong> is a feature of Azure SQL Database that allows you to hide sensitive data. For example, suppose your database contains your customers' credit card information. When exposing the database, you want to ensure the credit card numbers are not revealed; they should automatically be presented in the format &quot;xxxx-xxxx-xxxx-1234&quot;, i.e. exposing only the last 4 digits. </p>
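<p>Conceptually, the credit card mask produces the format shown above. The following Python sketch is purely illustrative (Azure SQL applies the mask inside the database engine; this only reproduces the resulting output format):</p>

```python
def mask_credit_card(number: str) -> str:
    """Mask a credit card number, exposing only the last 4 digits.

    Illustrative only: mimics the "Credit Card value" masking format
    (xxxx-xxxx-xxxx-1234) that Azure SQL Dynamic Data Masking presents.
    """
    digits = [c for c in number if c.isdigit()]
    return "xxxx-xxxx-xxxx-" + "".join(digits[-4:])


print(mask_credit_card("4111 1111 1111 1234"))  # xxxx-xxxx-xxxx-1234
```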
<h3>Feature Basics</h3>
<p>This feature can be accessed by navigating to your database and then clicking on the &quot;<strong>Dynamic Data Masking</strong>&quot; option under settings. By default, there are no masks applied. Click on &quot;<strong>+ Add mask</strong>&quot; to add a new mask.</p>
<p><strong>Note</strong> that the masks you apply do not affect administrators. Additionally, you can specify SQL users who should be excluded from masking. </p>
<p>Azure SQL Database will also automatically try to recommend the fields that should be masked.</p>
<img src="/images/15224833815abf40b536b22.png" alt="Navigating to the Dynamic Data Masking Option">
<h3>Adding Masking Rules</h3>
<p>When adding a masking rule, you provide the following information:</p>
<ol>
<li>Name for the mask (auto-populated based on your selections)</li>
<li>Schema</li>
<li>Table in that schema</li>
<li>Column in the table where the mask should be applied</li>
<li>Masking criteria</li>
</ol>
<p><strong>Note</strong> that the masking criteria vary based on the type of the column. For example, if a column does not hold numerical values, the &quot;Number (random number range)&quot; masking criterion will show as disabled.</p>
<img src="/images/15224833865abf40bad27e8.png" alt="Adding Masking Rule">
<p>The different masking criteria that you can apply are:</p>
<ol>
<li>Default value  (0, xxxx, 01-01-1900)</li>
<li>Credit Card value (xxxx-xxxx-xxxx-1234)</li>
<li>Email (aXXX@XXXX.com)</li>
<li>Number (random number range)</li>
<li>Custom string (prefix [padding] suffix)</li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-dynamic-data-masking</link>
<pubDate>Sun, 11 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure SQL Database and Azure Storage - Service Endpoints on Virtual Network</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>Azure <strong>Service Endpoints</strong> allow resources in a Virtual Network to access Azure SQL or Storage services without the traffic leaving the Azure network. </p>
<p>To configure this feature, you can navigate to your Virtual Network and then under the settings, select the &quot;<strong>Service endpoints</strong>&quot;. Click on &quot;+Add&quot; to add a Service Endpoint.</p>
<img src="/images/15224831715abf3fe37c1c1.png" alt="Navigating to Service Endpoints">
<p>In the popup, select the provider for which you want to configure the Service Endpoint. </p>
<p>Service Endpoints on the Virtual Networks are available for:</p>
<ol>
<li>Microsoft.Sql provider</li>
<li>Microsoft.Storage provider</li>
</ol>
<p>Also, select the subnet on which you want to configure the Service Endpoint and then hit &quot;<strong>Add</strong>&quot;.</p>
<img src="/images/15224831785abf3fea2a4a8.png" alt="Adding Service Endpoint">
<p>It will take some time (approximately 15 minutes) to configure the Service Endpoints at the backend. Once configured, you will see the configured endpoints in the portal as shown below.</p>
<img src="/images/15224831835abf3fef3682d.png" alt="Deployed Service Endpoint">
<p><strong>Note</strong> that even after you configure a service endpoint for SQL, you still need to allow access at the SQL Server level. The service endpoint enables the communication at the network level; a firewall rule on the SQL Server is still needed to allow that traffic through. This is explained in detail here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-firewall-rule-for-virtual-networks/" target="_blank">Azure SQL Database - Firewall Rule for Virtual Networks</a></p>
<p>Overall, this is a very powerful feature that is easy to configure and provides you with lots of flexibility. </p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-and-azure-storage-service-endpoints-on-virtual-network</link>
<pubDate>Sat, 10 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure SQL Database - Firewall Rule for Virtual Networks</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>Before going through this blog, please make sure you have covered these:</p>
<ul>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-set-server-firewall/" target="_blank">Set Server Firewall for Azure SQL Databases</a></li>
<li><a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-and-azure-storage-service-endpoints-on-virtual-network/" target="_blank">Service Endpoints for Azure SQL on Virtual Network</a>. <strong>Note</strong> that this is a prerequisite for configuring firewall rules at the Virtual Network level.</li>
</ul>
<p>Setting up a <strong>Firewall Rule for Virtual Networks</strong> at the SQL Server level enables you to grant a subnet in an Azure Virtual Network access to all the SQL Databases on that SQL Server. Firewall rules are always set at the server level, hence any rule you add grants access to all the databases on the server.</p>
<p>Setting up the rules is very simple. Navigate to the firewall settings for the SQL Database/SQL Server (as discussed in the previous blog), then focus on the lower section of the blade, which contains the Virtual Network firewall rules, as shown below.</p>
<img src="/images/15225265255abfe93da6c61.png" alt="Virtual Network Firewall Rules section">
<p>Note that you can:</p>
<ol>
<li>Add existing virtual network</li>
<li>Create a new virtual network (and provide access)</li>
</ol>
<p>As a best practice, you should plan the virtual network and subnets before configuring the SQL Server firewall. </p>
<p>When you click on &quot;Add existing virtual network&quot;, you are presented with the wizard below. Here you select:</p>
<ol>
<li>A name for the rule. This can be any descriptive name.</li>
<li>The subscription where the virtual network exists</li>
<li>The virtual network to which you want to allow access</li>
<li>The subnet within that virtual network to which you want to allow access</li>
</ol>
<img src="/images/15224829335abf3ef529038.png" alt="Create/Update Virtual Network Firewall Rule">
<p>Below is a screenshot of the rule with values populated. If the Service Endpoint is not enabled for the &quot;Microsoft.Sql&quot; provider, you will see a message to that effect, and the wizard will attempt to enable it.</p>
<img src="/images/15224829395abf3efbae000.png" alt="Virtual Network Firewall Rule Populated">
<p>That's all there is to it. Just hit OK and then Save to apply the rule.</p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-firewall-rule-for-virtual-networks</link>
<pubDate>Sun, 04 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure SQL Database - Set Server Firewall</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>Azure SQL Databases have a powerful layer of security at the SQL Server level. This layer is provided by the SQL Server <strong>Firewall</strong>. Azure provides you granular control to configure this firewall and to manage who gets access to your Azure SQL Database. By default, everything is blocked by the firewall. If you want to get access to Azure SQL Database then you will have to configure the Firewall at the SQL Server level. Only the IP addresses you configure have access to the SQL Databases on the Server.</p>
<p>Another key point: because rules are applied at the server level, any rule you configure applies to all the SQL Databases on that server. So it is important to segregate your databases onto different SQL Servers if you don't want them to share access.</p>
<h3>Accessing the Firewall Settings</h3>
<p>You can access the firewall settings by navigating to your Azure SQL <strong>Database</strong>. Then at the top of the blade, you will find the option for &quot;<strong>Set server firewall</strong>&quot;. Click on this button to access the firewall settings.</p>
<img src="/images/15224826395abf3dcf84ca9.png" alt="Set Server Firewall option on Azure SQL Database">
<p>Another way to access the settings is on Azure SQL <strong>Servers</strong>. Navigate to the related Azure SQL Server for your database. Under the settings, find the option for &quot;<strong>Firewalls and virtual networks</strong>&quot;. Clicking on this will also take you to the same firewall settings as the settings are set at the server level in both ways.</p>
<img src="/images/15224826455abf3dd50a424.png" alt="Firewall option on Azure SQL Server">
<h3>Firewall Rule Settings</h3>
<p>Let us look at the firewall rule settings in more detail. </p>
<ol>
<li>First, you have the option to enable or disable the <strong>access to Azure services</strong> as the toggle for &quot;Allow access to Azure services&quot;. This is set to On by default when creating the Database and SQL server. You can disable it here. This option allows Azure services to provide monitoring data and recommend changes at the database level.</li>
<li>Second, you have the option to <strong>configure a Rule</strong>. This is where you configure which IP Addresses have access to the SQL Server.</li>
<li>One commonly used rule is to open access for the <strong>Client IP</strong>. This is the IP address of the machine from which you are connected to the Azure Portal. It covers the common scenario where you want to connect to the SQL database via SSMS (SQL Server Management Studio) or some other tool from your development box. There is a button to add this rule directly: click &quot;<strong>+Add client IP</strong>&quot;. The client IP is also automatically displayed under the &quot;Allow access to Azure services&quot; section.</li>
<li>Lastly, there is an option to allow access to the SQL databases from a particular Subnet in a Virtual Network in Azure, without having to manually configure their IP addresses. This feature requires <strong>Service Endpoints</strong> to be configured at the Virtual Network level. We will discuss this feature in a later blog. </li>
</ol>
<img src="/images/15224826505abf3ddaaf183.png" alt="Firewall Settings">
<p>A typical Rule contains:</p>
<ol>
<li>A <strong>Rule Name</strong> - any descriptive name for the rule. E.g. it could be the name of the single VM for which you want to configure access, or the name of the network for which you are opening access</li>
<li><strong>Start IP</strong> - Start of the IP address range for which you want to open the access. E.g. if you want to open access for 10.20.1.0/24 then the Start IP will be 10.20.1.0</li>
<li><strong>End IP</strong> - End of the IP address range for which you want to open the access. E.g. if you want to open access for 10.20.1.0/24 then the End IP will be 10.20.1.255</li>
</ol>
<p>Note: To provide access to only one IP address, enter that address in both the Start and End IP fields.</p>
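<p>If you prefer to double-check the Start and End IPs for a subnet before typing them into the rule, they can be derived from the CIDR notation. The following Python sketch is an illustrative helper only (the portal simply takes the two addresses directly):</p>

```python
import ipaddress


def firewall_range(cidr: str) -> tuple[str, str]:
    """Return the (Start IP, End IP) pair for a CIDR block, matching the
    values entered in an Azure SQL Server firewall rule. Illustrative helper."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address), str(net.broadcast_address)


print(firewall_range("10.20.1.0/24"))  # ('10.20.1.0', '10.20.1.255')
```

For a single address (a /32 block), the Start and End IPs come out the same, matching the single-IP note above.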
<p>Note in the image below that:</p>
<ol>
<li>The client IP rule was added by clicking on the &quot;+Add client IP&quot; button. Note that the Start and End IP are the same for this rule</li>
<li>The second is a custom rule which allows access to the SQL Server from a specific VM hosting a website (which needs access to the database)</li>
</ol>
<img src="/images/15224826565abf3de09ea3f.png" alt="Setting Firewall Rules">
<p>Once you are done configuring, just hit Save to apply the firewall rules.</p>
<p>Next, we will learn about setting up the <a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-firewall-rule-for-virtual-networks/" target="_blank">Firewall Rule for Virtual Networks</a> and also <a href="https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-and-azure-storage-service-endpoints-on-virtual-network/" target="_blank">Service Endpoints for Azure SQL on Virtual Network</a></p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-set-server-firewall</link>
<pubDate>Fri, 02 Mar 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Resources - New Azure Log Analytics Language Cheat Sheets</title>
<description><![CDATA[<p>Azure Log Analytics (part of the OMS suite) has a very versatile query language. To investigate and report on your data, you need to know the query language at least at a basic level. Recently the language had a complete overhaul, with new syntax and various new features incorporated. This blog post covers resources for quickly learning the new syntax, specifically how to translate existing knowledge of the older syntax or of T-SQL. </p>
<h3>Older to new Query Language syntax</h3>
<p>If you have been working with the older Log Analytics query syntax, you have two options for converting that knowledge to the new query language:</p>
<ol>
<li>Use the in portal legacy syntax converter and learn as you convert</li>
<li>Use the Microsoft provided Cheat Sheet</li>
</ol>
<p>When you navigate to the OMS Log Analytics portal and go to the Log Search section, you will see a link above the query text window: &quot;<strong>Show legacy language converter</strong>&quot;. When you click this link, a new text box appears above the query text box. Type your legacy query and click the &quot;<strong>Convert</strong>&quot; button; the query will be converted into the new language syntax. Click Run to execute the query. If there are any errors in the query, you will be notified. </p>
<img src="/images/15212342425aac314289ed9.png" alt="OMS Language Converter">
<p>In the above example, the &quot;Event&quot; type is fetched and only the Source, EventLog, and EventID properties are selected. In the older format the query syntax was:</p>
<pre><code>Type=Event | select Source, EventLog, EventID</code></pre>
<p>In the newer format the same query looks as below:</p>
<pre><code>Event | project Source, EventLog, EventID</code></pre>
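<p>This simple pattern can even be translated mechanically. The following toy Python sketch handles only this one <code>Type=X | select ...</code> shape; the portal's converter, of course, handles far more of the legacy language:</p>

```python
import re


def convert_legacy(query: str) -> str:
    """Naively convert a legacy query of the form 'Type=X | select a, b'
    to the new 'X | project a, b' syntax. Toy illustration only."""
    query = re.sub(r"^\s*Type\s*=\s*(\w+)", r"\1", query)
    return re.sub(r"\bselect\b", "project", query)


print(convert_legacy("Type=Event | select Source, EventLog, EventID"))
# Event | project Source, EventLog, EventID
```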
<p>The complete cheat sheet, covering the key query syntax, can be found here: <a href="https://docs.loganalytics.io/docs/Learn/References/Legacy-to-new-to-Azure-Log-Analytics-Language" target="_blank">Legacy to new Azure Log Analytics Query Language cheat sheet</a></p>
<h3>T-SQL to new Query Language syntax</h3>
<p>If you are well versed in T-SQL syntax and new to OMS Azure Log Analytics, you can easily translate that knowledge to the Log Analytics query language with the help of Microsoft's cheat sheet for the key syntax.</p>
<p>For example, to select only the columns name and resultCode from a table named dependencies, the T-SQL query looks like:</p>
<pre><code>SELECT name, resultCode FROM dependencies</code></pre>
<p>Syntax for the same query in newer Log Analytics language will look like:</p>
<pre><code>dependencies 
| project name, resultCode</code></pre>
<p>As you might have guessed, &quot;project&quot; is the keyword in the new language for selecting specific columns, and selecting a table is as simple as typing its name.</p>
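<p>As a toy illustration of this mapping, the following Python sketch rewrites a simple <code>SELECT ... FROM ...</code> statement into the <code>table | project ...</code> form. It handles only this one pattern and is not a general T-SQL translator:</p>

```python
import re


def tsql_to_kql(sql: str) -> str:
    """Rewrite 'SELECT cols FROM table' as 'table | project cols'.
    Handles only this single pattern; illustration only."""
    match = re.match(r"^\s*SELECT\s+(.+?)\s+FROM\s+(\w+)\s*$", sql, re.IGNORECASE)
    if not match:
        raise ValueError("only simple SELECT ... FROM ... statements are supported")
    cols, table = match.groups()
    return f"{table} | project {cols}"


print(tsql_to_kql("SELECT name, resultCode FROM dependencies"))
# dependencies | project name, resultCode
```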
<p>The complete cheat sheet can be found here: <a href="https://docs.loganalytics.io/docs/Learn/References/SQL-to-Azure-Log-Analytics" target="_blank">SQL to Azure Log Analytics query language cheat sheet</a></p>]]></description>
<link>https://HarvestingClouds.com/post/resources-new-azure-log-analytics-language-cheat-sheets</link>
<pubDate>Mon, 19 Feb 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure SQL Database - Transparent Data Encryption</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p><strong>Transparent Data Encryption</strong> (<strong>TDE</strong>) is the automated encryption of your data at rest. When enabled, it encrypts your database, its backups, and its transaction log files at rest. It is normally enabled by default to provide an additional layer of security; if it is not, you will get a recommendation to configure it in Azure Security Center. </p>
<p>Turning off Transparent Data Encryption will result in decryption of the complete database and will leave your data vulnerable. When you turn it back on, the database is encrypted again. Depending on the size of your database, turning TDE on or off may take some time due to the underlying encryption/decryption process.</p>
<p>This service does not require any changes at the application level. Behind the scenes, transparent data encryption performs real-time I/O encryption and decryption of the data at the page level: each page is decrypted when it's read into memory and encrypted before being written to disk.</p>
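<p>The page-level read/write path can be pictured with the following toy Python sketch. This is purely conceptual: the XOR keystream here is a stand-in cipher, not the AES-based encryption SQL Server actually uses, and real TDE runs inside the database engine.</p>

```python
import hashlib

PAGE_SIZE = 8192  # SQL Server stores data in 8 KB pages


def _keystream(key: bytes, page_no: int, length: int) -> bytes:
    """Derive a deterministic per-page keystream (toy stand-in for a real cipher)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(
            key + page_no.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return stream[:length]


def encrypt_page(key: bytes, page_no: int, data: bytes) -> bytes:
    """XOR a page with its keystream; models encrypting a page on a disk write."""
    ks = _keystream(key, page_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


# XOR with the same keystream undoes itself, so the read path (decrypting a
# page as it is loaded into memory) mirrors the write path exactly.
decrypt_page = encrypt_page

page = b"customer data".ljust(PAGE_SIZE, b"\x00")
encrypted = encrypt_page(b"database-key", 42, page)
assert encrypted != page
assert decrypt_page(b"database-key", 42, encrypted) == page
```

Because the transformation happens entirely at the I/O boundary, the application only ever sees plaintext pages, which is why TDE requires no application changes.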
<p><strong>Note</strong>: Even if the database is encrypted with TDE, when you take an export of the database (e.g. creating a BACPAC file), the export file is created without encryption. Make sure you safeguard/encrypt such files before sharing them over an open network.</p>
<h3>Configuring TDE at the Database level</h3>
<p><strong>Transparent Data Encryption</strong> (<strong>TDE</strong>) can be enabled or disabled for each individual database. The configuration is a simple On/Off toggle. To configure it, navigate to your Azure SQL Database, select &quot;Transparent Data Encryption&quot; in the settings, and set the value for &quot;Data Encryption&quot; to On or Off. </p>
<p>Notice the Encryption status. If you want your data to be encrypted, then the encryption status should say &quot;Encrypted&quot; with a green tick mark. </p>
<img src="/images/15221022915ab9701353492.png" alt="Configuring Transparent Data Encryption">
<h3>Configuring to use your own Key with TDE</h3>
<p>You can <strong>use your own key for encryption</strong> with Transparent Data Encryption. If you do not bring your own key, a service-managed certificate is used for encryption and decryption.</p>
<p>To do this you will need to upload your key to an Azure Key Vault or generate a new key within the Key Vault, which is very easy to configure. Once you have a key in an Azure Key Vault, you will be able to use the same with Transparent Data Encryption (TDE). </p>
<p>This setting can't be configured at the database level; it has to be configured at the server level. Navigate to the underlying Azure SQL Server (where the SQL Database is hosted) and follow these steps:</p>
<ol>
<li>In the settings, click on the Transparent Data Encryption</li>
<li>Select &quot;Yes&quot; for &quot;Use your own key&quot;. </li>
<li>Click on &quot;Select a Key&quot; and choose the key from your Azure Key Vault. Alternatively, you can choose &quot;Enter Key Identifier&quot;. </li>
<li>Once the key is configured, select &quot;Save&quot; to save the settings.</li>
</ol>
<img src="/images/15221022965ab9701842974.png" alt="Using own Key for encryption with Transparent Data Encryption">
<p>This option provides you with all the security at the data level (at rest) while ensuring you have complete control over the process.</p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-transparent-data-encryption</link>
<pubDate>Sat, 17 Feb 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure SQL Database - Auditing & Threat Detection</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p><strong>Auditing &amp; Threat Detection</strong> in Azure SQL Database is a security feature that is very simple to configure yet very powerful. The <strong>Auditing</strong> feature writes an audit trail of all activity on your database to a Storage Account. You can choose the number of days for which to retain the data, which helps you remain compliant. In the event of a failure or compliance breach, you can go to the audit logs and pinpoint the exact cause of the issue. </p>
<p><strong>Threat Detection</strong> is an advanced feature where Microsoft runs various algorithms under the hood to detect patterns and identify potential attacks on your data; e.g. SQL injection, or patterns resembling SQL injection, can be detected when this feature is enabled. Please note that Threat Detection has an additional cost of $15/server/month, free for the first 60 days. You can enable Auditing without enabling Threat Detection, but you can't enable Threat Detection without first enabling Auditing. </p>
<p>SQL Threat Detection integrates its alerts with Azure Security Center. If any anomalous activity is detected, an alert is raised; you can get a notification via email and can also review the alert within the portal. You get real-time, actionable alerts, and each alert contains information on how to mitigate it.</p>
<h3>Configurations</h3>
<p>To configure Auditing and Threat Detection at the database level, navigate to the database. Then follow the below steps:</p>
<ol>
<li>In the database settings, click on &quot;Auditing and Threat Detection&quot;</li>
<li>You can optionally configure the settings at the server level by clicking the link &quot;View server settings&quot;</li>
<li>Next, toggle the &quot;Auditing&quot; setting on or off, and select the storage account and the retention in days. </li>
<li>Next, you can toggle &quot;Threat Detection&quot; on or off. If you turn it on, you can select which types of threats you want to detect.</li>
<li>You also have the option of configuring email notifications, which work with Threat Detection.</li>
</ol>
<img src="/images/15220454795ab89227d4b60.png" alt="Auditing and Threat Detection Configuration">
<p>When configuring Audit Logs Storage, you can select any subscription under the tenant and a storage account in that subscription. You can then set the retention in days: setting it to zero means unlimited retention, and the maximum is 3285 days. You can also select whether the Primary or Secondary key is used to access the Storage Account when writing the logs.</p>
<img src="/images/15220454845ab8922cbf5d7.png" alt="Audit Logs Storage Configurations">
<p>Under Threat Detection types, you can select any one or all of the following types:</p>
<ol>
<li>SQL injection</li>
<li>SQL injection vulnerability</li>
<li>Anomalous client login</li>
</ol>
<img src="/images/15220454895ab8923166b5d.png" alt="Threat Detection Types">
<h3>Enabling at Database level vs Server level</h3>
<p>If Blob Auditing or Threat Detection are enabled on the server, they will always apply to the database, regardless of the database settings.</p>
<p>At the server level, the configuration is almost identical. You need to navigate to the related Azure SQL Server first (instead of the SQL Database). Notice at the top of the below screenshot, it says &quot;SQL server&quot; instead of &quot;SQL database&quot;. Then navigate to its &quot;Auditing and Threat Detection&quot; section and perform configurations similar to the sections above. </p>
<img src="/images/15220454945ab8923632728.png" alt="Audit and Threat Detection at the server level">]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-sql-database-auditing-threat-detection</link>
<pubDate>Sat, 10 Feb 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Custom RBAC Roles</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>Before going through this blog, please ensure that you have visited the basics of RBAC Roles in general, explained in a primer here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-rbac-roles/" target="_blank">Demystifying Azure Security - RBAC Roles</a>.</p>
<p>This blog explains an easy approach to understanding and creating your own Custom RBAC Roles in Azure (ARM model). We will inspect an existing simple role and reverse engineer from it the way to create Custom RBAC Roles. </p>
<h3>Inspecting Existing Role's Definition</h3>
<p>Start by inspecting any existing Role's definition. To do this, run the cmdlet <strong>Get-AzureRMRoleDefinition</strong> and provide the name of any built-in RBAC Role. For this blog, run the below script to inspect the &quot;Reader&quot; and &quot;Virtual Machine Contributor&quot; roles.</p>
<pre><code>Login-AzureRMAccount

Get-AzureRMRoleDefinition -Name "Reader" | ConvertTo-Json | Out-File C:\rbacrole-reader.json

Get-AzureRMRoleDefinition -Name "Virtual Machine Contributor" | ConvertTo-Json | Out-File C:\rbacrole-VMContributor.json</code></pre>
<p>If you open and inspect the &quot;rbacrole-reader.json&quot; file you will see the JSON similar to below:</p>
<pre><code>{
    "Name":  "Reader",
    "Id":  "aaaa11a1-3333-48ef-bd42-f606fba81ae7",
    "IsCustom":  false,
    "Description":  "Lets you view everything, but not make any changes.",
    "Actions":  [
                    "*/read"
                ],
    "NotActions":  [

                   ],
    "AssignableScopes":  [
                             "/"
                         ]
}</code></pre>
<p>Notice that the definition contains the following sections:</p>
<ol>
<li><strong>Name</strong> - Name of the role</li>
<li><strong>Id</strong> - unique guid for the role</li>
<li><strong>IsCustom</strong> - boolean value. &quot;true&quot; for the Custom Roles and &quot;false&quot; for the built-in roles</li>
<li><strong>Description</strong> - description of the role</li>
<li><strong>Actions</strong> - Allowed list of actions for the Role</li>
<li><strong>NotActions</strong> - Not Allowed list of actions for the Role</li>
<li><strong>AssignableScopes</strong> - Scopes at which this role can be assigned, e.g. a list of subscription IDs. The role definition must explicitly list every subscription (or narrower scope) where it will be used; otherwise you will not be able to assign the role there.</li>
</ol>
<h3>Understanding Actions and NotActions</h3>
<p>As mentioned before, Actions describe the list of allowed actions for the Role, whereas NotActions describe the list of disallowed actions. You can use the wildcard * and special syntax to define Actions and NotActions, as per the Microsoft documentation here: <a href="https://docs.microsoft.com/en-us/azure/active-directory/role-based-access-control-custom-roles" target="_blank">Create custom roles for Azure Role-Based Access Control</a>:</p>
<p>Operation strings that contain wildcards (*) grant access to all operations that match the operation string. For instance:</p>
<ul>
<li><code>*/read</code> grants access to read operations for all resource types of all Azure resource providers.</li>
<li><code>Microsoft.Compute/*</code> grants access to all operations for all resource types in the Microsoft.Compute resource provider.</li>
<li><code>Microsoft.Network/*/read</code> grants access to read operations for all resource types in the Microsoft.Network resource provider.</li>
<li><code>Microsoft.Compute/virtualMachines/*</code> grants access to all operations of virtual machines and its child resource types.</li>
<li><code>Microsoft.Web/sites/restart/Action</code> grants access to restart websites.</li>
</ul>
<p>Use <code>Get-AzureRmProviderOperation</code> (in PowerShell) or <code>azure provider operations show</code> (in Azure CLI) to list operations of Azure resource providers. You may also use these commands to verify that an operation string is valid and to expand wildcard operation strings.</p>
<pre><code>Get-AzureRMProviderOperation Microsoft.Compute/virtualMachines/*/action | FT Operation, OperationName

Get-AzureRMProviderOperation Microsoft.Network/*</code></pre>
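<p>To build intuition for how these wildcard operation strings behave, the matching can be approximated in a few lines of Python (an illustrative sketch using fnmatch, not Azure's actual evaluation engine):</p>
<pre><code>from fnmatch import fnmatchcase

def matches(operation, patterns):
    # A wildcard (*) in an operation string matches any characters,
    # including the / separators between provider, type and action.
    return any(fnmatchcase(operation, p) for p in patterns)

print(matches("Microsoft.Network/virtualNetworks/read", ["*/read"]))    # True
print(matches("Microsoft.Compute/virtualMachines/start/action",
              ["Microsoft.Compute/virtualMachines/*"]))                 # True
print(matches("Microsoft.Web/sites/read", ["Microsoft.Compute/*"]))     # False</code></pre>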
<h3>Defining a Custom role</h3>
<p>Let's define a custom role that can start or restart a VM in Azure but can't open a support ticket.</p>
<p>Save the following code as &quot;yourCustomRole01.json&quot; file on your C drive (or any other location).</p>
<pre><code>{
  "Name": "Virtual Machine Start and Restart",
  "Id": "7ed03a9f-b372-4341-ba8d-38ef8e614038",
  "IsCustom": true,
  "Description": "Can restart virtual machines but can't open support tickets.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [
    "Microsoft.Support/*"
  ],
  "AssignableScopes": [
    "/subscriptions/aaaaaaaa-1111-1111-1111-111111111111",
    "/subscriptions/aaaaaaaa-2222-2222-2222-222222222222",
    "/subscriptions/aaaaaaaa-3333-3333-3333-333333333333/resourceGroups/RG-Prod-USE2"
  ]
}</code></pre>
<p>Notice that the Actions and NotActions sections are set as per the requirements.</p>
<p>Also, note that under AssignableScopes the role is available for assignment to all Resources and Resource Groups in the first two subscriptions, but only in the Resource Group named &quot;RG-Prod-USE2&quot; for the third subscription in the list.</p>
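<p>The effect of AssignableScopes can be sketched as a simple prefix check (a hypothetical illustration of the idea, not the actual Azure implementation):</p>
<pre><code>def assignable_at(scope, assignable_scopes):
    # A role can be assigned at a scope that equals, or sits below,
    # one of its AssignableScopes entries.
    scope = scope.lower()
    return any(scope == s.lower() or scope.startswith(s.lower() + "/")
               for s in assignable_scopes)

scopes = ["/subscriptions/aaaaaaaa-1111-1111-1111-111111111111",
          "/subscriptions/aaaaaaaa-3333-3333-3333-333333333333/resourceGroups/RG-Prod-USE2"]

# Assignable anywhere inside the first subscription:
print(assignable_at("/subscriptions/aaaaaaaa-1111-1111-1111-111111111111/resourceGroups/AnyRG", scopes))  # True
# But not at the third subscription's root, only inside RG-Prod-USE2:
print(assignable_at("/subscriptions/aaaaaaaa-3333-3333-3333-333333333333", scopes))  # False</code></pre>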
<h3>Creating New Custom role</h3>
<p>Once you have the definition ready in a JSON file, you can use the &quot;<strong>New-AzureRMRoleDefinition</strong>&quot; cmdlet to create the Custom Role Definition, as shown below. Make sure to alter the path to the json file as per your environment.</p>
<pre><code>New-AzureRMRoleDefinition -InputFile "C:\yourCustomRole01.json"</code></pre>
<p>Now you will be able to use this new Custom Role when assigning access to someone, tweaking the access so that you grant only the permissions that are actually needed on your internal and external resources.</p>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-custom-rbac-roles</link>
<pubDate>Sun, 04 Feb 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure Policies - Initiative Definitions</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p><strong>Initiative Definitions</strong> are a great way to combine and apply multiple policies together. An Initiative Definition is a group of policy definitions that together achieve a singular goal, and Initiatives are the Microsoft-recommended way to use Policies.  </p>
<p>E.g. you want to combine a set of policy definitions for compute and related resources, applying the following set of policies:</p>
<ol>
<li>Allowed locations - to restrict locations where resources can be deployed</li>
<li>Allowed SKUs - to restrict SKUs for the VMs e.g. VMs can be created with SKU Standard_DS2_v2 only</li>
<li>Enforce Tags and their default value - to enforce the usage of Tags on resources</li>
</ol>
<p>Instead of managing and assigning the policies separately, you can club these policies together in an <strong>Initiative Definition</strong> and then assign the same.</p>
<h3>Defining Initiative Definition</h3>
<p>To create a new Initiative Definition, go to your Subscription and then to the Policies section under Settings. Within Policies, go to the <strong>Definitions</strong> section under Authoring. Here you can click on the &quot;Initiative Definitions&quot; tab to view the existing definitions, and click on &quot;+Initiative definition&quot; to create a new one.</p>
<img src="/images/15220373735ab8727dc341d.png" alt="Creating Initiative Definition">
<p>The new Initiative Definition blade will open up. Here you can create the definition. </p>
<ol>
<li>The first section, as shown below, is similar to a Policy definition. You will provide the basic information here like Definition location, Name, Description and Category. </li>
<li>On the right side, select the Policies that you want to be part of this Initiative Definition in the &quot;Available Definitions&quot; section. Select and Add all the policies that you want to group together.</li>
<li>Configure parameters for the selected Policies in the &quot;Policies and Parameters&quot; section. If you selected a wrong policy, you can also delete it in this section. </li>
</ol>
<img src="/images/15220373785ab87282e8f55.png" alt="Initiative Definition Blade">
<h3>Initiative Parameters</h3>
<p>Continuing from the previous section, you have lots of options when configuring parameters under the &quot;Policies and Parameters&quot; section. You can either &quot;<strong>Set value</strong>&quot; right within the Initiative Definition, or you can &quot;<strong>Use Initiative Parameter</strong>&quot;. Set the values within the definition if you don't want them to change during assignment; if you want to set the values dynamically at assignment time, use Initiative Parameters.</p>
<p><strong>Initiative Parameters</strong> are used to parameterize the Initiative Definition. Their values can be set, from the list of allowed values (a subset of all possible values), during the assignment.</p>
<img src="/images/15220373835ab87287b145a.png" alt="Options for setting values for Parameters">
<p>When you choose to use an Initiative Parameter for any value, a parameter is automatically created. </p>
<img src="/images/15220373895ab8728d4f4b0.png" alt="Initiative Parameters">
<p>You can create your own Initiative parameters as well. Note that if an Initiative parameter is not used anywhere, you won't be able to save the definition; you will get a &quot;Bad Request&quot; error while trying to save it.</p>
<img src="/images/15220373945ab87292397f6.png" alt="Option for creating new Initiative Parameter">
<h3>Assigning Initiative Definition</h3>
<p>The Initiative assignment is exactly the same as a Policy Assignment, and all the options are similar. During the assignment, you provide values for the parameters of all the policies within the Initiative Definition.</p>
<img src="/images/15220373995ab872973ab67.png" alt="Assigning Initiative Definition">]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-policies-initiative-definitions</link>
<pubDate>Sun, 28 Jan 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Fixing the error - The subscription is not registered to use provider</title>
<description><![CDATA[<p>This blog shows you a simple step to resolve the provider registration error that you can come across while working programmatically with Azure (in the Azure Resource Manager model).</p>
<p>You can come across this error while running a PowerShell cmdlet or elsewhere: <strong>The subscription 'xxxxxx-xxxx-xxx-xxxx-xxxxxxxx' is not registered to use providername</strong>. E.g. in the below screenshot I got the error <em>The subscription '8c665920-2c37-419f-81fb-99d737cc4697' is not registered to use microsoft.insights</em>. </p>
<img src="/images/15212142425aabe322e6b05.png" alt="Provider Error">
<p>To resolve this error, navigate to the Azure Portal (i.e. <a href="https://portal.azure.com">https://portal.azure.com</a>) and authenticate with your credentials. Navigate to the Subscriptions section; if you don't see this section, click on &quot;All Services&quot; and then search for Subscriptions. Once in the Subscriptions blade, select the subscription for which you have been getting the error (if there is only one subscription, select that). In the selected subscription's blade, follow the below steps:</p>
<ol>
<li>Click on the Resource Providers</li>
<li>In the list of providers, find the one for which you have been getting the error and then click on the &quot;Register&quot; link in front of it</li>
<li>The provider will go into &quot;Registering&quot; state</li>
<li>Once completed, the provider will then go into &quot;Registered&quot; state</li>
</ol>
<img src="/images/15212142785aabe346ede90.png" alt="Subscription Settings">
<p>Now retry the cmdlet or API call you were trying before; it should no longer generate the provider error. </p>
<link>https://HarvestingClouds.com/post/fixing-the-error-the-subscription-is-not-registered-to-use-provider</link>
<pubDate>Mon, 22 Jan 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Creating a Custom Policy - Part 3 - Defining your Custom Policy</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>As discussed earlier, the policies provide a way to control what is allowed and what is not allowed in your environment. Earlier we looked at how to view the definition of existing policies and discussed the structure of the policies in detail. Now it's time to put the knowledge together and define a policy.</p>
<h3>Getting the JSON ready for the Policy Definition</h3>
<p>Have the JSON ready for the Policy Definition. If you are going to deploy the policy via Portal then all you need is the Policy Rule portion of the definition. For this post, let's use the sample policy definition for &quot;<em>Only allow a certain VM platform image</em>&quot; located here: <a href="https://docs.microsoft.com/en-us/azure/azure-policy/scripts/allow-certain-vm-image">https://docs.microsoft.com/en-us/azure/azure-policy/scripts/allow-certain-vm-image</a></p>
<p>Various other samples are provided by Microsoft at this link: <a href="https://docs.microsoft.com/en-us/azure/azure-policy/json-samples" target="_blank">Templates for Azure Policy</a></p>
<p>This sample policy forces end users to use only a certain version of Ubuntu Server. </p>
<p>The policy definition looks as below:</p>
<pre><code>{
    "type": "Microsoft.Authorization/policyDefinitions",
    "name": "platform-image-policy",
    "properties": {
        "displayName": "Only allow a certain VM platform image",
        "description": "This policy ensures that only UbuntuServer, Canonical is allowed from the image repository",
        "parameters": {},
        "policyRule": {
            "if": {
                "allOf": [
                    {
                        "field": "type",
                        "in": [
                            "Microsoft.Compute/disks",
                            "Microsoft.Compute/virtualMachines",
                            "Microsoft.Compute/VirtualMachineScaleSets"
                        ]
                    },
                    {
                        "not": {
                            "allOf": [
                                {
                                    "field": "Microsoft.Compute/imagePublisher",
                                    "in": [
                                        "Canonical"
                                    ]
                                },
                                {
                                    "field": "Microsoft.Compute/imageOffer",
                                    "in": [
                                        "UbuntuServer"
                                    ]
                                },
                                {
                                    "field": "Microsoft.Compute/imageSku",
                                    "in": [
                                        "14.04.2-LTS"
                                    ]
                                },
                                {
                                    "field": "Microsoft.Compute/imageVersion",
                                    "in": [
                                        "latest"
                                    ]
                                }
                            ]
                        }
                    }
                ]
            },
            "then": {
                "effect": "deny"
            }
        }
    }
}</code></pre>
<p>Let's look at the various sections of the policy rule more closely:</p>
<ol>
<li><strong><em>if</em></strong> condition defines the policy condition</li>
<li><strong><em>allOf</em></strong> ensures that all the conditions must be true</li>
<li><strong><em>field type</em></strong> is specified as Disks, Virtual Machines and VM scale sets under Microsoft.Compute</li>
<li>The next section defines a nested condition, which evaluates to true if the actual VM image used does not match the combination of all the conditions specified. </li>
<li><strong><em>then</em></strong> section defines what should happen. In this case it says to <strong><em>deny</em></strong> the operation, i.e. the VM provisioning will fail with a validation error for not conforming to the policy.</li>
</ol>
<p>Note that there are no parameters used in this policy definition.</p>
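<p>To see how the if condition combines allOf, not and in, here is a minimal, hypothetical evaluator for this style of rule (a sketch only; the real Azure Policy engine supports many more operators):</p>
<pre><code>def evaluate(condition, resource):
    # Supports only the constructs used above: allOf, not, and field/in.
    if "allOf" in condition:
        return all(evaluate(c, resource) for c in condition["allOf"])
    if "not" in condition:
        return not evaluate(condition["not"], resource)
    return resource.get(condition["field"]) in condition["in"]

rule_if = {"allOf": [
    {"field": "type", "in": ["Microsoft.Compute/virtualMachines"]},
    {"not": {"allOf": [
        {"field": "Microsoft.Compute/imagePublisher", "in": ["Canonical"]},
        {"field": "Microsoft.Compute/imageOffer", "in": ["UbuntuServer"]}]}}]}

ubuntu_vm = {"type": "Microsoft.Compute/virtualMachines",
             "Microsoft.Compute/imagePublisher": "Canonical",
             "Microsoft.Compute/imageOffer": "UbuntuServer"}
other_vm = dict(ubuntu_vm, **{"Microsoft.Compute/imagePublisher": "RedHat"})

print(evaluate(rule_if, ubuntu_vm))  # False - condition not met, VM allowed
print(evaluate(rule_if, other_vm))   # True  - the "deny" effect applies</code></pre>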
<h3>Defining Policy using Portal</h3>
<p>To define policies via the Azure Portal, navigate to Policies under the Settings of your Subscription. Click on Definitions and then click on &quot;<strong>+Policy Definition</strong>&quot; at the top.</p>
<img src="/images/15218358115ab55f2336e11.png" alt="Defining New Policy">
<p>A new blade will open where you can define the policy. Provide the details as follows:</p>
<ol>
<li>Provide the policy definition's location. This will be your subscription in which you want the definition to exist.</li>
<li>Provide a Display Name and Description of the policy. Try to be as descriptive as possible.</li>
<li>Either create a new Category for the policy or use one of the existing ones</li>
<li>Copy and paste your policy definition. In the portal, only provide the &quot;policyRule&quot; section. </li>
</ol>
<p>The section for policyRule will look something similar to below.</p>
<pre><code>{
    "policyRule": {
      all content here
    }
}</code></pre>
<img src="/images/15218358165ab55f28ad5a4.png" alt="New Policy Definition Blade">
<p>Hit <strong>Save</strong> to create the Policy Definition. You can now start assigning this policy in your environment.</p>
<h3>Defining Policy using PowerShell</h3>
<p>Ensure that you have the latest version of Azure PowerShell installed. Then deploying the policy using PowerShell is as easy as executing the below two cmdlets:</p>
<ol>
<li><strong><em>New-AzureRmPolicyDefinition</em></strong> - to create the policy definition </li>
<li><strong><em>New-AzureRMPolicyAssignment</em></strong> - to use the policy and assign the policy at a scope defined</li>
</ol>
<p>Store the policy in a JSON file on your computer. Ensure that you save only the &quot;if-then&quot; condition, wrapped in curly braces. This will be used as an input to the cmdlet. The file should look similar to the below:</p>
<pre><code>{
            "if": { &lt;&lt;content here&gt;&gt;},
            "then":{ &lt;&lt;content here&gt;&gt;}
}</code></pre>
<p>To create the policy definition, use code similar to the below. Ensure that you update the file name and path as per your environment.</p>
<pre><code>$definition = New-AzureRmPolicyDefinition -Name "RestrictingUbuntuVMVersion" -DisplayName "Restrict Ubuntu version for VM Deployment" -Description "Detailed Description here" -Policy "C:\temp\CustomPolicyDefinition.json"</code></pre>
<p>To use the policy, do the assignment using a code similar to below:</p>
<pre><code>$assignment = New-AzureRMPolicyAssignment -Name &lt;customAssignmentName&gt; -Scope &lt;SubscriptionId&gt;  -PolicyDefinition $definition</code></pre>
<p>That is all there is to defining and using your custom policy.</p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-creating-a-custom-policy-part-3-defining-your-custom-policy</link>
<pubDate>Sun, 14 Jan 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Just In Time VM access</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p><strong>Just in time VM access</strong> is a feature under <strong>Azure Security Center</strong>. In simple terms, it allows you to control access to a VM. When you enable JIT, all access to the VM is locked down on all ports, via Network Security Group (NSG) rules. Access is granted only for the allowed duration and only on the requested ports, and everything is locked down again at the end of that duration.</p>
<p>Attackers leverage various ways to get into your environment; one such way is using bots to automate brute-force attempts at entry. If you need to access a VM in your environment from the Internet without a VPN (e.g. to change some files on a Web app VM), then you are potentially opening up port 3389 on the VM, and that can become a target for attackers. Locking down the VM except when you need it, and only for the duration of the requirement, reduces these risks significantly. </p>
<h3>Pre-requisites</h3>
<p>The key pre-requisites to be able to use this feature are:</p>
<ol>
<li>The Azure Security Center needs to be upgraded to Advanced Security as shown below</li>
<li>The VM on which you want to configure JIT access should have a Network Security Group (NSG) linked to it. If it doesn't have any then you can create one and associate it with the network interface of the VM.</li>
</ol>
<p>The first pre-requisite requires you to upgrade Security Center to the Advanced Security that comes with the <strong>Standard Tier</strong>. You can start the upgrade from any of the advanced features; the portal will automatically prompt you to upgrade, as shown below:</p>
<img src="/images/15212356695aac36d55f4be.png" alt="Advanced Security in Security Center">
<h3>How to Access and Configure it</h3>
<p>To configure this, go to Azure <strong>Security Center</strong>. Within Security Center, go to the &quot;Advanced Cloud Defense&quot; category and then click on the &quot;Just in time VM access&quot; link as shown below:</p>
<img src="/images/15212354315aac35e7b2c2d.png" alt="JIT in Security Center">
<p>Next follow the below steps to enable JIT on a VM:</p>
<ol>
<li>Ensure that you are in the JIT section of Security Center</li>
<li>Go to the &quot;Recommended&quot; tab. This will show you all the VMs in your environment. </li>
<li>Click on the VM for which you want to enable Just in Time access. You can select multiple VMs here.</li>
<li>Click on the &quot;Enable JIT on x VMs&quot; button</li>
</ol>
<img src="/images/15212369445aac3bd0c8b26.png" alt="Enabling JIT">
<p>You will be presented with various settings for enabling JIT. These settings can be configured as:</p>
<ol>
<li>By default, various common ports (e.g. 3389 for RDP) are configured for JIT access with a 3-hour time range. Click on any of the default ports to tweak its settings.</li>
<li>You can add more ports and protocols by clicking on the &quot;Add&quot; button at the top.</li>
<li>Tweak the settings on the new blade that opens up.</li>
<li>Click on &quot;Save&quot; to save your settings.</li>
</ol>
<img src="/images/15212369535aac3bd974b55.png" alt="Settings while enabling JIT">
<p>That's all there is to it. Once JIT is configured you can view the VM in the &quot;Configured&quot; tab.</p>
<h3>Requesting access and activity log in JIT</h3>
<p>You can request access and perform various other management operations by following below steps:</p>
<ol>
<li>Once the JIT is configured, the VM will show up in the &quot;Configured&quot; tab as shown below. </li>
<li>Click on the VM that you want to manage or request access to. </li>
<li>As soon as you click, the &quot;Request access&quot; button will appear. Click the button; it will ask for the number of hours for which you need access and the corresponding ports. Please note that access can be granted only on the ports configured earlier.</li>
<li>Click on the ellipsis (the 3 dots) in front of the VM record and you will see various options. Using these options you can view Properties, view the Activity log for previous requests and any attack attempts, and Edit or Remove the access.</li>
</ol>
<img src="/images/15212369585aac3bde79755.png" alt="Requesting access and activity log in JIT">
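<p>Conceptually, a JIT request boils down to a time-boxed grant on specific ports. A hypothetical sketch of that idea (illustration only, not the actual Security Center implementation):</p>
<pre><code>from datetime import datetime, timedelta, timezone

def grant_access(ports, requested_hours, max_hours=3):
    # The requested duration is capped at the configured time range;
    # after the window expires, the NSG rules lock the ports down again.
    hours = min(requested_hours, max_hours)
    expires = datetime.now(timezone.utc) + timedelta(hours=hours)
    return {"ports": list(ports), "hours": hours, "expires": expires}

grant = grant_access([3389], requested_hours=5)
print(grant["hours"])  # 3 - capped at the 3-hour default configured earlier</code></pre>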
<h3>Adding non-Azure computers</h3>
<p>You can add non-Azure computers for an extra fee per Node. Cost is calculated based on 15 USD/Node/Month. Resources that count as a node are VMs and computers. The selected security tier will be applied to current and new resources. For more information, visit <a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing" target="_blank">Microsoft Security Center's Standard tier for enhanced security</a> information page.</p>
<p>Ensure that you have onboarded the non-Azure computers to a linked Azure Log Analytics workspace. This is required in order to onboard non-Azure computers to Security Center.</p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-just-in-time-vm-access</link>
<pubDate>Wed, 10 Jan 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Troubleshooting Azure Site Recovery (ASR) - Data Replication Initiation Issues - Part 2</title>
<description><![CDATA[<p>This is the second part of troubleshooting Azure Site Recovery (ASR) Data replication issues. Part 1 is located here:  <a href="https://HarvestingClouds.com/post/troubleshooting-azure-site-recovery-asr-data-replication-not-working/" target="_blank">Troubleshooting Azure Site Recovery (ASR) - Data Replication Not Working</a></p>
<p>Go through Part 1 before this blog, as it covers the location of the logs and various preliminary steps. This blog covers the issue where you are not even able to enable Data Replication for a server. </p>
<h3>Preliminary Checks</h3>
<p>Perform the following basic checks related to connectivity:</p>
<ul>
<li>Ensure that the ASR management server has connectivity with Azure.</li>
<li>Ensure connectivity between the management server (process server) and the source machine which you are trying to replicate. Ensure that the source machine is accessible from the process server.
<ul>
<li>You will also need to enable and allow &quot;File and Printer Sharing&quot; on the source machine in the Windows Firewall. </li>
</ul></li>
<li>Ensure that the account that you use for enabling the protection has administrator rights on the source machine. This is needed for installation of the Mobility services agent on the source machine.</li>
<li>Also allow Windows Management Instrumentation (WMI) through the Windows Firewall, if not already allowed.</li>
<li>Disable remote User Account Control (UAC) if you are using a local administrator account to install the Mobility service.</li>
</ul>
<h3>Checking Services</h3>
<p>Also, check whether the following services are running and configured correctly on the source machine:</p>
<ol>
<li>If the <strong>Volume Shadow Copy (VSS)</strong> service Startup Type is set to Disabled, change it to Automatic.</li>
<li>If the <strong>COM+ System Application (COMSysApp)</strong> service Startup Type is set to Disabled, change it to Automatic.</li>
<li>If the <strong>Microsoft Distributed Transaction Coordinator (MSDTC)</strong> service Startup Type is set to Disabled, change it to Automatic.</li>
<li>If the service Startup Types were already set to Automatic, check whether COM+ enumeration succeeds:
<ul>
<li>You can check COM+ enumeration via Component Services (comexp.msc)</li>
<li>Browse to Component Services -&gt; Computers -&gt; My Computer -&gt; COM+ Application. You should be able to expand the System Application node and see the contents under that node.</li>
<li>Ensure that Volume Shadow Copy Service is listed under Component Services -&gt; Computers -&gt; My Computer -&gt; DCOM Config</li>
</ul></li>
</ol>
<h3>Alternate Option</h3>
<p>If for any reason you can't perform some of the above steps, e.g. if you can't provide admin access to the service account on the source machine or can't disable remote UAC, then you have an alternative option: you can install the Mobility services agent on the source machine prior to enabling the protection. You will still need the services configured appropriately, but you will not need admin access while enabling the protection. To install the Mobility services agent, you have various options:</p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/vmware-azure-mobility-install-configuration-mgr" target="_blank">Install using software deployment tools like System Center Configuration Manager</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/vmware-azure-mobility-deploy-automation-dsc" target="_blank">Install with Azure Automation and Desired State Configuration (Automation DSC)</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/vmware-azure-install-mobility-service#install-mobility-service-manually-by-using-the-gui" target="_blank">Install manually from the UI</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/vmware-azure-install-mobility-service#install-mobility-service-manually-at-a-command-prompt" target="_blank">Install manually from a command prompt</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/site-recovery/vmware-azure-install-mobility-service#install-mobility-service-by-push-installation-from-azure-site-recovery" target="_blank">Install using the Site Recovery push installation</a></li>
</ul>
<p>All these options are described in detail here: <a href="https://docs.microsoft.com/en-us/azure/site-recovery/vmware-azure-install-mobility-service" target="_blank">Install the Mobility service</a></p>
<h3>Next Steps</h3>
<p>Restart the job after ensuring that all the pre-requisites described in the previous sections are met. If you are still facing issues, revisit Part 1 of the troubleshooting series, linked at the beginning of this blog. Let us know in the comments below if the issue persists. </p>
<link>https://HarvestingClouds.com/post/troubleshooting-azure-site-recovery-asr-data-replication-initiation-issues-part-2</link>
<pubDate>Mon, 08 Jan 2018 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Creating a Custom Policy - Part 2 - Understanding the Policy Structure</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a>. Ensure that you have read earlier blogs about the basics of Azure Policies before reading this blog.</p>
<h3>Structure of a Policy JSON Definition</h3>
<p>Policy definitions are written in JSON. The policy definition contains elements for:</p>
<ul>
<li>mode</li>
<li>parameters</li>
<li>display name</li>
<li>description</li>
<li>policy rule
<ul>
<li>logical evaluation</li>
<li>effect</li>
</ul></li>
</ul>
<p>Let's understand these components with the help of an example. Earlier we saw how to find the definition of existing policies. One such policy is &quot;Allowed locations&quot;. This policy restricts the locations in Azure where resources can be deployed. The definition for the policy looks as shown below:</p>
<pre><code>{
  "properties": {
    "mode": "all",
    "parameters": {
      "allowedLocations": {
        "type": "array",
        "metadata": {
          "description": "The list of locations that can be specified when deploying resources",
          "strongType": "location",
          "displayName": "Allowed locations"
        }
      }
    },
    "displayName": "Allowed locations",
    "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}</code></pre>
<ol>
<li><strong>Mode</strong> tells you the type of resources to which the policy will be applied. Allowed values are &quot;all&quot; (where all Resource Groups and Resources are evaluated) and &quot;indexed&quot; (where the policy is evaluated only for resource types that support tags and location)</li>
<li><strong>Parameters</strong> are used for providing inputs to the policy. They can be reused at multiple locations within the policy. You can refer to a parameter as: &quot;<strong><em>[parameters('allowedLocations')]</em></strong>&quot;</li>
<li><strong>Display Name</strong> is used to identify the policy in the portal or when working with policies programmatically.</li>
<li>A <strong>description</strong> is used to provide details for the policy.</li>
<li><strong>Policy Rule</strong> is the core of the definition. This is the section where the restrictions are defined in a Policy Definition. To understand a policy, you need to focus most of your effort on this section. Every policy rule has two key sections, defined under &quot;if-then&quot; blocks. 
<ol>
<li><strong>Logical Evaluation</strong> is defined under the &quot;if&quot; block. It defines the condition that is evaluated to determine whether the policy should be applied.</li>
<li><strong>Effect</strong> is defined in the &quot;then&quot; block. This defines what will happen if the condition is met. </li>
</ol></li>
</ol>
<h3>Policy Rules</h3>
<p>Let's look at the policy rules in more detail. The general syntax of the &quot;if-then&quot; block is as shown below:</p>
<pre><code>{
  "if": {
    &lt;condition&gt; | &lt;logical operator&gt;
  },
  "then": {
    "effect": "deny | audit | append"
  }
}</code></pre>
<p>Supported logical operators are:</p>
<ul>
<li><code>"not": {condition  or operator}</code></li>
<li><code>"allOf": [{condition or operator},{condition or operator}]</code></li>
<li><code>"anyOf": [{condition or operator},{condition or operator}]</code></li>
</ul>
<p>The <strong>Not</strong> operator means that the opposite of the condition must be true for the policy to be applied. <strong>AllOf</strong> requires all the defined conditions to be true at the same time. <strong>AnyOf</strong> requires any one of the conditions to be true for the policy to be applied.</p>
<p>In the previous example, the policy rule section is as shown below:</p>
<pre><code>    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }</code></pre>
<p>In simple terms, the rule above says that if the location is not in the list of allowed locations, defined by the parameter <strong><em>allowedLocations</em></strong>, then the effect will be <strong><em>deny</em></strong> i.e. the resource creation will not be allowed.</p>
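<p>The operators and conditions above can be sketched as a tiny recursive evaluator. This is an illustrative Python simplification, not the actual Azure Policy engine; the <code>evaluate</code> function and the flattened resource dictionary are assumptions made for this sketch:</p>

```python
def evaluate(condition, resource):
    """Recursively evaluate a simplified policy condition tree."""
    if "not" in condition:
        return not evaluate(condition["not"], resource)
    if "allOf" in condition:
        return all(evaluate(c, resource) for c in condition["allOf"])
    if "anyOf" in condition:
        return any(evaluate(c, resource) for c in condition["anyOf"])
    # Leaf condition: compare a resource field against a value or a list.
    value = resource.get(condition["field"])
    if "equals" in condition:
        return value == condition["equals"]
    if "in" in condition:
        return value in condition["in"]
    raise ValueError("unsupported condition")

# The "Allowed locations" rule with the parameter already substituted:
rule = {"not": {"field": "location", "in": ["eastus", "eastus2"]}}

print(evaluate(rule, {"location": "westus"}))  # True  -> "deny" effect applies
print(evaluate(rule, {"location": "eastus"}))  # False -> resource is allowed
```

<p>Running the sketch against the &quot;Allowed locations&quot; rule shows how the &quot;not ... in&quot; combination flags any location outside the allowed list.</p>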
<p>You can find the complete list of operators and conditional constructs here: <a href="https://docs.microsoft.com/en-us/azure/azure-policy/policy-definition" target="_blank">Azure Policy definition structure</a></p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-creating-a-custom-policy-part-2-understanding-the-policy-structure</link>
<pubDate>Sat, 30 Dec 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Creating a Custom Policy - Part 1 - Viewing Definition of an existing Policy</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>Creating a custom policy to enforce your custom requirements in the Azure environment is very easy. This provides you with granular control over what a policy should perform and what should be allowed and what should be denied.</p>
<p>Policies are written in JSON format. It is always good to base your custom policy definition on one of the built-in policies, if one exists that is close to what you are trying to do. </p>
<h3>Viewing Definition of Existing Policies</h3>
<p>You have two options to view the definition of an existing policy.</p>
<ol>
<li>Using PowerShell</li>
<li>Using Azure Portal</li>
</ol>
<h3>Using PowerShell</h3>
<p><strong>Using PowerShell</strong>, run the below cmdlet to view all the policies in your environment.</p>
<pre><code>Get-AzureRmPolicyDefinition</code></pre>
<p>Then go through the list and find the policy that you want to use. Find the ResourceId of that policy and then run the below cmdlet to fetch the details of that policy.</p>
<pre><code>Get-AzureRmPolicyDefinition -Id "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c"</code></pre>
<h3>Using Azure Portal</h3>
<p><strong>Using the Azure Portal</strong> to view the definition is also very easy. Follow these steps:</p>
<ul>
<li>Navigate to the Subscriptions section in the Azure Portal</li>
<li>Select your subscription</li>
<li>Click on &quot;Policies&quot; under Settings</li>
<li>Within the Policies blade, click on &quot;Definitions&quot;</li>
</ul>
<p>Within the Policy Definitions, select the Policy from the list for which you want to view the definition. Click on the 3 dots to the right. From the context menu select &quot;View definition&quot;.</p>
<img src="/images/15218187805ab51c9c095ce.png" alt="Policy Definitions">
<p>This will open up another blade. Click on the &quot;Json&quot; tab at the top; this will show you the rule part of the policy definition. The rule is the most important part of the policy. We will look at the other components of policies later as well.</p>
<img src="/images/15218187845ab51ca0c313d.png" alt="Policy Definition Details">
<p>E.g. The JSON for &quot;Allowed storage account SKUs&quot; built-in policy looks as shown below.</p>
<pre><code>{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Storage/storageAccounts"
      },
      {
        "not": {
          "field": "Microsoft.Storage/storageAccounts/sku.name",
          "in": "[parameters('listOfAllowedSKUs')]"
        }
      }
    ]
  },
  "then": {
    "effect": "Deny"
  }
}</code></pre>
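<p>To make the allOf/not combination concrete, here is a minimal Python sketch that mirrors the rule's logic locally. The helper name and the flattened resource dictionary (with a <code>sku.name</code> key) are hypothetical, for illustration only:</p>

```python
def violates_sku_policy(resource, allowed_skus):
    """Mirror of the rule above: a storage account whose SKU is not in the
    allowed list matches the "if" block, so the Deny effect would apply."""
    return (resource["type"] == "Microsoft.Storage/storageAccounts"
            and resource["sku.name"] not in allowed_skus)

allowed = ["Standard_LRS", "Standard_GRS"]
# A storage account with a disallowed SKU -> denied:
print(violates_sku_policy({"type": "Microsoft.Storage/storageAccounts",
                           "sku.name": "Premium_LRS"}, allowed))   # True
# A VM is not matched by the first condition, so the policy does not apply:
print(violates_sku_policy({"type": "Microsoft.Compute/virtualMachines",
                           "sku.name": "Premium_LRS"}, allowed))   # False
```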
<p>Next, we will dissect and understand the Policy structure in detail.</p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-creating-a-custom-policy-part-1-viewing-definition-of-an-existing-policy</link>
<pubDate>Fri, 29 Dec 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure Policies - 2 - Assigning a Policy</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>In this post, we will view the policies in action. Policy assignment is very easy on the Azure Portal. We will be assigning a built-in policy at a subscription scope.</p>
<h3>Accessing the Policies in the Azure portal</h3>
<p>Begin by accessing the Policies in the Azure Portal. To do this follow the below steps:</p>
<ol>
<li>Navigate to <strong>Subscriptions</strong> (via All Services or the navigation sidebar)</li>
<li>Select the subscription for which you want to view the Policies</li>
<li>Scroll down to the &quot;Settings&quot; category in the menu of the subscription</li>
<li>Click on &quot;<strong>Policies</strong>&quot; to access the policies in Azure</li>
</ol>
<h3>Assigning the Built-in Policies</h3>
<p>To perform the assignment, click on &quot;Assign Policy&quot; from either the Compliance or the Assignments tab.</p>
<img src="/images/15217866825ab49f3a81d6a.png" alt="Assign Policy">
<p>In the new blade, provide values for the following:</p>
<ol>
<li>Policy to be applied</li>
<li>Name and Description of the Policy. The name defaults to the name of the selected policy. As a best practice, provide a detailed description.</li>
<li>Assigned by will be your name by default</li>
<li>You can select a pricing tier, either Free or Standard. The Standard pricing tier gives you compliance evaluation of the resources in your environment against the policy</li>
<li>Scope for the Policy</li>
<li>Exclusions from the Policy</li>
<li>Any additional parameters related to the policy</li>
</ol>
<p>To select from the policies, click on the blue button with an ellipsis (i.e. 3 dots) in front of the Policy box. This will pop up another blade listing all the Policy definitions.</p>
<img src="/images/15217894075ab4a9df6a23e.png" alt="Assign Policy Blade details">
<p>Scroll through various Built-in policies. Once we define any custom user-defined policies, they will also be displayed here. Select the policy &quot;Allowed locations&quot; from the list of the policies as an example. Click on &quot;Select&quot; once done.</p>
<img src="/images/15217894445ab4aa0494a0d.png" alt="Selecting Policy">
<p>Select the Scope for applying the policy. You can leave the default at the Subscription level, or you can click on the blue button in front of the scope text box and select the Resource Groups under the subscription to which you want to apply the policy.</p>
<p>You can also specify Exclusions if required. These Resource Groups or resources will not be evaluated against the policy.</p>
<img src="/images/15217894615ab4aa15470ec.png" alt="Selecting Scope">
<p>Lastly, you will have additional parameters for policy-related values. These parameters vary depending on the policy you have selected. E.g. for the &quot;Allowed locations&quot; policy, you will see a parameter for allowed locations. Select &quot;East US&quot; and &quot;East US 2&quot; as an example.</p>
<img src="/images/15217894765ab4aa248f15f.png" alt="Providing value for Policy related Parameters">
<p>Once you complete the configuration, click on the &quot;<strong>Assign</strong>&quot; button to apply the policy.</p>
<h3>Validating the Policy</h3>
<p>To validate the policy for &quot;Allowed locations&quot; follow these steps:</p>
<ol>
<li>Try to deploy a Storage Account, a VM, or any other resource in a location that is NOT allowed. E.g. try deploying a storage account in the &quot;West US&quot; location. This should fail with a validation error stating the policy id.</li>
<li>Perform the same deployment in one of the allowed locations. E.g. try deploying a storage account in the &quot;East US&quot; location. This should succeed without any errors.</li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-policies-2-assigning-a-policy</link>
<pubDate>Wed, 27 Dec 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - RBAC Roles</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p>Azure <strong>RBAC Roles</strong> are a great way to securely provide users with access limited to specific actions in Azure. The focus is to provide only the access necessary, so that no account in your organization has more access than needed. If an account gets compromised, this ensures your environment is not left overly vulnerable. For example, if you grant an employee access to manage Virtual Machines in your environment, that employee can't alter the virtual networks by mistake or delete the storage accounts in the environment. </p>
<h3>Roles Hierarchy</h3>
<p>The RBAC Roles can be assigned at the following three levels in the order of hierarchy:</p>
<ol>
<li><strong>Subscription</strong></li>
<li><strong>Resource Group</strong></li>
<li><strong>Resource</strong></li>
</ol>
<p>Any Resource in Azure must belong to a Resource Group (under the ARM model), and every Resource Group in Azure must belong to a single subscription.</p>
<p>If a person is assigned a role at the Subscription level, they get access to all Resource Groups and to all resources within those resource groups. If a person is assigned access at the Resource Group level, they automatically get access to all the Resources within that Resource Group. Finally, if a person is granted access only to an individual Resource, they get access only to that Resource.</p>
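<p>The inheritance described above can be modeled with a simple prefix check. This is an illustrative Python sketch, not an Azure API; the slash-separated scope notation and the <code>has_access</code> helper are assumptions for the sketch:</p>

```python
def has_access(assignments, user, target_scope):
    """An assignment grants access to its own scope and everything below it.
    Scopes are written as "/"-separated paths, e.g. "/sub1/rg1/vm1"."""
    return any(
        who == user and (target_scope == scope
                         or target_scope.startswith(scope + "/"))
        for who, scope in assignments
    )

assignments = [("alice", "/sub1"), ("bob", "/sub1/rg1")]
print(has_access(assignments, "bob", "/sub1/rg1/vm1"))    # True: inherited from rg1
print(has_access(assignments, "bob", "/sub1/rg2/vm2"))    # False: outside bob's scope
print(has_access(assignments, "alice", "/sub1/rg2/vm2"))  # True: subscription-wide
```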
<h3>Built-in Roles</h3>
<p>There are various built-in roles for this purpose, such as:</p>
<ol>
<li>Contributor - create and manage but can't grant access to others</li>
<li>Reader - can only view</li>
<li>Owner - full access</li>
</ol>
<p>The complete list of Built-in roles can be viewed here: <a href="https://docs.microsoft.com/en-us/azure/active-directory/role-based-access-built-in-roles" target="_blank">Built-in roles for Azure role-based access control</a></p>
<h3>Viewing and Adding Access</h3>
<p>To view or add access, first decide the scope at which you want to provide the access. Select the scope, e.g. a Resource Group, on which you want to view existing access or grant access to someone. Then follow these steps (as per the image below):</p>
<ol>
<li>Click on &quot;Access Control (IAM)&quot; to access the RBAC access control</li>
<li>View the access in the center area. Scroll down to view all types of roles and the Users, Groups or Apps with access</li>
<li>Click on &quot;+Add&quot; button, as shown below, to add access to a new user, group or application</li>
<li>In the new popup blade, select the Role (e.g. Contributor), Assignment scope and name/email address of the user or app to whom you want to grant the access. You can select multiple users/apps as well.</li>
</ol>
<img src="/images/15216952115ab339eba24e0.png" alt="Viewing and Adding RBAC Access">
<p>Within each subscription, you can create up to 2000 role assignments.</p>
<p>Learn about creating Custom RBAC Roles here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-custom-rbac-roles/" target="_blank">Demystifying Azure Security - Custom RBAC Roles</a></p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-rbac-roles</link>
<pubDate>Wed, 20 Dec 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Demystifying Azure Security - Azure Policies - 1 - Basics</title>
<description><![CDATA[<p>This blog post is part of the <strong>Demystifying Azure Security</strong> series. All posts in the series can be found here: <a href="https://HarvestingClouds.com/post/demystifying-azure-security-series-index/" target="_blank">Demystifying Azure Security - Series Index</a></p>
<p><strong>Azure Policies</strong> are a great tool to make access to Azure more secure by prohibiting certain operations and enforcing various rules in your environment. The policies make your environment compliant and ensure that you adhere to the service level agreements and standards set within your organization. If the end user tries to deploy a resource violating the policies then the deployment fails at the validation step ensuring that your environment remains compliant. If you already have resources which are not compliant then you will be able to review those and take necessary actions.</p>
<h3>Examples of Azure Policies</h3>
<p>Here are a few of most commonly used policies:</p>
<ol>
<li><strong>Restricting the Allowed locations</strong> where a resource can be deployed. You can restrict the allowed locations where resources can be deployed to the ones that are closer to your geographic location. This also ensures that the users within your organization can't deploy resources to noncompliant locations. </li>
<li><strong>Enforcing a Tag</strong> and its Value. E.g. you have a Resource Group for the Finance department. You want to ensure that any resource deployed in that Resource Group is tagged with &quot;CostCenter&quot; and a specific value for that cost center. You can enforce this using a Policy.</li>
<li><strong>Not Allowed Resource Types</strong>. Using this policy you can prohibit the deployment of certain resource types in your environment. </li>
</ol>
<p>There are various other built-in Policies. You can also create your own policies which we will discuss in more detail later.</p>
<h3>Scope where a Policy is Assigned</h3>
<p>A Policy can be assigned at the below levels:</p>
<ol>
<li><strong>Subscription</strong> level - Applied to all Resource Groups and Resources within the subscription</li>
<li><strong>Resource Group</strong> level - Applied only to the Resources within the selected Resource Groups</li>
</ol>
<p><strong>Exclusions</strong>: In addition to the assignment scope, you can exclude certain Resource Groups or individual Resources from the Policy assignments. The Policy will be applied to all the Resources in the scope except the ones excluded by defining the Exclusions. </p>
<h3>How are Policies different from Role-Based Access Control (RBAC)</h3>
<p>Through <strong>Role-Based Access Control (RBAC)</strong>, you focus on the role and scope of access that a user can have in Azure. You provide the access control (IAM) within Azure for Subscription, Resource Group or an individual Resource. You define a role for the user/app/group like Contributor, Reader etc.  You can use the built-in roles or can define custom roles.</p>
<p>With <strong>Policies</strong>, you focus on the properties of the resources with which the user can work at the defined scope. You define what properties are allowed and what is restricted. You can use the built-in policies or can define your own.</p>
<p>E.g. through RBAC you define that a particular user has the Reader role at the subscription and the Contributor role at a particular Resource Group. This means the user can view all the resources in the subscription but can only modify or create resources in the Resource Group where they have Contributor access.</p>
<p>Now if you define a policy that restricts the types of resources deployed (as discussed in the examples above), the user will not be able to deploy the restricted resource types, even though they have Contributor access to a Resource Group.</p>
<h3>Accessing Policies in the Azure portal</h3>
<p>To access the Policies in the Azure portal:</p>
<ol>
<li>Navigate to <strong>Subscriptions</strong> (via All Services or the navigation sidebar)</li>
<li>Select the subscription for which you want to view the Policies</li>
<li>Scroll down to the &quot;Settings&quot; category in the menu of the subscription</li>
<li>Click on &quot;<strong>Policies</strong>&quot; to access the policies in Azure</li>
</ol>
<img src="/images/15217849365ab4986888c88.png" alt="Policies in Azure Portal">
<p>We will be discussing more about the Azure Policies in later blogs.</p>]]></description>
<link>https://HarvestingClouds.com/post/demystifying-azure-security-azure-policies-1-basics</link>
<pubDate>Sun, 10 Dec 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Great Way to learn - 2018 Microsoft Azure Community Study Groups</title>
<description><![CDATA[<p>How many times have you started preparing for an Azure certification and left it midway for one reason or another? We all need a little extra nudge. The Microsoft Azure Community Study Groups are just the thing you need to create and fulfill your new year resolutions. </p>
<p>Here are the registration links, provided &quot;as is&quot;:</p>
<table border="1" cellspacing="0" cellpadding="0">
<tbody>
<tr>
<td width="461" valign="top">
<p><b>Exam</b></p>
</td>
<td width="158" valign="top">
<p><b>Registration Link</b></p>
</td>
<td width="208" valign="top">
<p><b>Dates</b></p>
</td>
</tr>
<tr>
<td width="461" valign="top">
<p><b>70-532: Developing Microsoft Azure Solutions</b></p>
</td>
<td width="158" valign="top">
<p><a href="https://aka.ms/532asg">https://aka.ms/532asg</a></p>
</td>
<td width="208" valign="top">
<p>March 23 – May 24, 2018</p>
</td>
</tr>
<tr>
<td width="461" valign="top">
<p><b>70-533: Implementing Microsoft Azure Infrastructure Solutions</b></p>
</td>
<td width="158" valign="top">
<p><a href="https://aka.ms/533asg">https://aka.ms/533asg</a></p>
</td>
<td width="208" valign="top">
<p>January 12 – April 13, 2018</p>
</td>
</tr>
<tr>
<td width="461" valign="top">
<p><b>70-535: Architecting Microsoft Azure Solutions </b></p>
</td>
<td width="158" valign="top">
<p><a href="https://aka.ms/535asg">https://aka.ms/535asg</a></p>
</td>
<td width="208" valign="top">
<p>January 12 – May 24, 2018</p>
</td>
</tr>
<tr>
<td width="461" valign="top">
<p><b>70-483: Programming in C#</b></p>
</td>
<td width="158" valign="top">
<p><a href="https://aka.ms/483asg">https://aka.ms/483asg</a></p>
</td>
<td width="208" valign="top">
<p>January 12 – March 2, 2018</p>
</td>
</tr>
<tr>
<td width="461" valign="top">
<p><b>70-486: Developing ASP.NET MVC Web Applications</b></p>
</td>
<td width="158" valign="top">
<p><a href="https://aka.ms/486asg">https://aka.ms/486asg</a></p>
</td>
<td width="208" valign="top">
<p>January 12 – March 23, 2018</p>
</td>
</tr>
<tr>
<td width="461" valign="top">
<p><b>70-487: Developing Microsoft Azure and Web Services</b></p>
</td>
<td width="158" valign="top">
<p><a href="https://aka.ms/487asg">https://aka.ms/487asg</a></p>
</td>
<td width="208" valign="top">
<p>March 9 – May 11, 2018</p>
</td>
</tr>
</tbody>
</table>
<p>You can access the main blog from here:
<a href="https://blogs.technet.microsoft.com/uspartner_ts2team/2017/12/04/2018-microsoft-azure-community-study-groups/" target="_blank">2018 Microsoft Azure Community Study Groups</a></p>]]></description>
<link>https://HarvestingClouds.com/post/great-way-to-learn-2018-microsoft-azure-community-study-groups</link>
<pubDate>Tue, 05 Dec 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Introducing the Harvesting Clouds YouTube channel</title>
<description><![CDATA[<p>I am happy to introduce the YouTube channel for Harvesting Clouds. The channel can be accessed here: <a href="https://www.youtube.com/channel/UCIfsXMJ8HMJkx1lMP-lngzw" target="_blank">Harvesting Clouds YouTube Channel</a></p>
<p>Do <strong>Subscribe</strong> to the channel to get notified of new content. <strong>Like</strong> the content if you learned something new or gained a new perspective on things you already knew.</p>
<p>Check out the author introduction video below:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/HxudnbeFofk?rel=0" frameborder="0" allowfullscreen></iframe>
<p>The managed and ordered content via <strong>playlists</strong> can be found here: <a href="https://www.youtube.com/channel/UCIfsXMJ8HMJkx1lMP-lngzw/playlists" target="_blank">Playlists - Harvesting Clouds</a></p>
<p><strong>All videos</strong> on the channel can be found here: <a href="https://www.youtube.com/channel/UCIfsXMJ8HMJkx1lMP-lngzw/videos" target="_blank">All Videos - Harvesting Clouds</a></p>]]></description>
<link>https://HarvestingClouds.com/post/introducing-the-harvesting-clouds-youtube-channel</link>
<pubDate>Wed, 29 Nov 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Video - Azure Automation - Introduction</title>
<description><![CDATA[<p>This is the first video in the Azure Automation series. It provides an introduction and covers the core theoretical concepts of Azure Automation.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/8tLdzsg6YVk?rel=0" frameborder="0" allowfullscreen></iframe>
<p>You can view and subscribe to the YouTube channel here: <a href="https://www.youtube.com/channel/UCIfsXMJ8HMJkx1lMP-lngzw" target="_blank">HarvestingClouds on YouTube</a></p>]]></description>
<link>https://HarvestingClouds.com/post/video-azure-automation-introduction</link>
<pubDate>Wed, 29 Nov 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Introducing Azure Reserved VM Instances (RIs)</title>
<description><![CDATA[<p>Azure Reserved VM Instances provide an easy option to save cost for predictable workloads. You commit to either a one-year or a three-year term and pay the complete discounted cost upfront. </p>
<h2>When should you use this</h2>
<p>If you know that your workload will be up and running continuously, then Azure Reserved VM Instances are for you. The savings can be up to 72% with Reserved VM Instances. If your VM is not going to be running most of the time and you have the option to automate shutting down the Virtual Machine, then you don't need Reserved VM Instances.</p>
<h2>What the cost savings look like</h2>
<p>The cost savings can be very significant, especially when combined with the Azure Hybrid Benefit. The graphic below is based on a Dv2 three-year RI with the Azure Hybrid Benefit.</p>
<p><img src="https://HarvestingClouds.com/images/15120755665a20712ed8b1b.png" alt="Potential Savings with Azure Reserved VM Instances" /></p>
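<p>As a back-of-the-envelope illustration only: the hourly rate below is made up, and only the &quot;up to 72%&quot; headline figure comes from this post:</p>

```python
payg_hourly = 0.20                     # hypothetical pay-as-you-go rate, USD/hour
discount = 0.72                        # "up to 72%" with a 3-year RI + Hybrid Benefit
ri_hourly = payg_hourly * (1 - discount)            # effective hourly rate
monthly_savings = (payg_hourly - ri_hourly) * 730   # savings over ~730 hours/month
```

<p>With these made-up numbers, a $0.20/hour VM drops to about $0.056/hour, roughly $105 saved per month of continuous running.</p>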
<h2>Other aspects</h2>
<p>Other aspects of Azure Reserved VM Instances that you need to know are:</p>
<ul>
<li>You can assign the RI benefit at either the enrollment level or the subscription level</li>
<li>The assignment is as easy as providing the Region, the VM series/size, and the term. The two terms offered today are 1 year and 3 years</li>
<li>You can exchange for a new instance size or location as your needs change in the future</li>
<li>You also have the option to cancel anytime directly with Microsoft for a pro-rated refund</li>
</ul>
<p>Reference: <a href="https://azure.microsoft.com/en-ca/pricing/reserved-vm-instances/" target="_blank">Azure Reserved VM Instances</a></p>]]></description>
<link>https://HarvestingClouds.com/post/introducing-azure-reserved-vm-instances-ris</link>
<pubDate>Sun, 12 Nov 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Why you should be using Azure Managed Disks now?</title>
<description><![CDATA[<p>Azure Managed Disks bring a lot of advantages when compared to Azure Storage Account-based VM disks. The first thing to note is that managed disks relate only to VM disks, not general blob storage. In this post, let's take a look at the benefits you get when you create your virtual machine with managed disks instead of storage account-based disks.</p>
<h3>More Controlled Access Management</h3>
<p>Let's assume a scenario where all the disks for the virtual machines in your environment belong to a particular storage account, with VMs belonging to both the Finance and HR departments. If you want to give someone access to a VHD file of a VM belonging to the HR department, you have to grant access to the whole storage account, which is the lowest level at which access can be granted. This inadvertently opens up access to the Finance VMs' VHD files as well.</p>
<p>Managed Disks are individual resources in Azure. If a VM has 1 OS disk and 2 data disks, all implemented as managed disks, you can grant access to just one of the data disks without granting access to any of the other disks.</p>
<h3>No Storage account service limits</h3>
<p>Earlier, storage accounts had service limits related to IOPS at the storage account level. As your infrastructure grew, the number of disks could reach a point where this service limit was hit, affecting your architecture. With managed disks, you are no longer constrained by storage account limits.</p>
<h3>Ability to take Snapshot</h3>
<p>Now with managed disks, you have the capability to take snapshots on the fly and later restore from these snapshots as required. You can take these snapshots onto a different storage account.</p>
<h3>Ability to Capture better images</h3>
<p>The images that you capture of VMs created using managed disks will include not just the OS disk but also all the data disks.</p>
<h3>Ability to convert a Standard disk to Premium disk and vice versa</h3>
<p>Earlier, if you wanted to convert a standard disk to a premium disk (or vice versa), you needed to create a new storage account and copy over the disk. With managed disks, this is as easy as shutting down the virtual machine and changing a value in a drop-down.</p>
<h3>Other benefits</h3>
<p>Other benefits include:</p>
<ul>
<li>Better reliability for Availability Sets</li>
<li>Highly durable and available with design for 99.999% availability</li>
<li>Better Azure Backup service support with the ability to create a backup job with time-based backups, easy VM restoration, and backup retention policies</li>
</ul>
<p>Reference: </p>
<ul>
<li><a href="https://azure.microsoft.com/en-ca/services/managed-disks/" target="_blank">Managed Disks product page</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/managed-disks-overview" target="_blank">Managed Disks overview</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/why-you-should-be-using-azure-managed-disks-now</link>
<pubDate>Sun, 05 Nov 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Cloud Shell - Embedding Cloud Shell in your Websites</title>
<description><![CDATA[<p>Embedding Azure Cloud Shell in your websites gives your visitors an option to directly interact with Azure, without even leaving your website. Your website may be talking about a code sample that your visitor may want to try in Azure. Instead of opening a new instance of Azure Portal, the visitors can try the command right from your site if you have embedded the cloud shell in your website.</p>
<p>All you need is the embed code and you are all set. Just add this code to your site, and you will be able to launch the Cloud Shell right from there with the click of a button. Behind the scenes, this is nothing but a hyperlink to the URL: <a href="https://shell.azure.com">https://shell.azure.com</a></p>
<h3>Embed Code</h3>
<p>The embed code is as follows if you are using Markdown:</p>
<pre><code>[![Launch Cloud Shell](https://shell.azure.com/images/launchcloudshell.png "Launch Cloud Shell")](https://shell.azure.com)</code></pre>
<p>If you are building a website in HTML then the embed code will look like:</p>
<pre><code>&lt;a style="cursor:pointer" onclick='javascript:window.open("https://shell.azure.com", "_blank", "toolbar=no,scrollbars=yes,resizable=yes,menubar=no,location=no,status=no")'&gt;&lt;image src="https://shell.azure.com/images/launchcloudshell.png" /&gt;&lt;/a&gt;</code></pre>
<h3>Code in Action</h3>
<p>Clicking on the image below will open a popup that navigates to the <a href="https://shell.azure.com">https://shell.azure.com</a> URL. You can log in to Azure and try any commands. I encourage you to inspect the link below and compare it with the embed code above.</p>
<p><a style="cursor:pointer" onclick='javascript:window.open("https://shell.azure.com", "_blank", "toolbar=no,scrollbars=yes,resizable=yes,menubar=no,location=no,status=no")'><image src="https://shell.azure.com/images/launchcloudshell.png" /></a></p>
<h3>Choosing Shell Type</h3>
<p>The above examples open the shell the visitor used most recently. If you want to steer visitors toward a specific shell experience, append the shell type to the URL in the embed code:</p>
<ul>
<li>Bash Shell - <a href="https://shell.azure.com/bash">https://shell.azure.com/bash</a></li>
<li>PowerShell Shell - <a href="https://shell.azure.com/powershell">https://shell.azure.com/powershell</a></li>
</ul>
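<p>For example, to pin the embed to the Bash experience, take the Markdown embed code from above and change only the target URL (the image stays the same):</p>
<pre><code>[![Launch Cloud Shell](https://shell.azure.com/images/launchcloudshell.png "Launch Cloud Shell")](https://shell.azure.com/bash)</code></pre>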
<h3>Reference:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/cloud-shell/embed-cloud-shell" target="_blank">Embed Azure Cloud Shell</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-cloud-shell-embedding-cloud-shell-in-your-websites</link>
<pubDate>Mon, 16 Oct 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Cloud Shell - Introduction</title>
<description><![CDATA[<p><strong>Azure Cloud Shell</strong> is a very convenient option to run your automation scripts against your subscriptions. </p>
<p>The shell is provided in two flavors:</p>
<ul>
<li>Bash based shell</li>
<li>PowerShell in Cloud Shell (currently in Preview)</li>
</ul>
<p>The Cloud Shell is a browser-based experience. This means you do not need to install any dependencies for the Azure commands to run, and there is no dependency on your machine's hardware: you can work irrespective of your local machine's configuration.</p>
<h3>Support for various tools</h3>
<p>Azure Cloud Shell comes preloaded with various tools depending on the shell type selected. It includes Linux tools like bash, sh, tmux and dig. The preloaded Azure tools include Azure CLI 2.0 and 1.0, AzCopy, and more. Text editors include vim and nano, and the Git source control tools are also included, along with various other build, container and database tools. It also ships with language support for .NET version 2.0.0, Java version 1.8, Node.js 6.9.4, PowerShell 6.0 (beta), and Python 2.7 and 3.5. Check the References section for the full and up-to-date list of tools and language support.</p>
<h3>Key Concepts</h3>
<p>Below are some of the key concepts (in a nutshell) about the Cloud Shell.</p>
<ol>
<li>You can opt for any one of the two shell options available, as per your preferences. You can choose to have a <strong>Bash shell</strong> or <strong>PowerShell in Cloud shell</strong></li>
<li>You can easily switch between the two shells at any time</li>
<li>If you are logged into Azure portal then you are also automatically authenticated into the Azure Cloud Shell</li>
</ol>
<p>Other important concepts as per Microsoft documentation are:</p>
<ol>
<li>Cloud Shell runs on a temporary host provided on a per-session,
per-user basis </li>
<li>Cloud Shell times out after 20 minutes without
interactive activity </li>
<li>Cloud Shell requires an Azure file share to be
mounted </li>
<li>Cloud Shell uses the same Azure file share for both Bash and
PowerShell </li>
<li>Cloud Shell is assigned one machine per user account </li>
<li>Bash persists $Home using a 5-GB image held in your file share</li>
<li>Permissions are set as a regular Linux user in Bash</li>
</ol>
<h3>Pricing</h3>
<p>Cloud Shell does not have any direct cost linked to it. The charge for Cloud Shell is only for the usage of the underlying Azure Storage. The exact charge depends on the amount you store on that storage and the type of the storage. By default, Locally Redundant Storage is created. </p>
<p><strong>References</strong>:</p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/azure/cloud-shell/overview" target="_blank">Azure Cloud Shell Overview</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/cloud-shell/features" target="_blank">Azure Cloud Shell Features and Tools</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/cloud-shell/features-powershell" target="_blank">Features &amp; tools for PowerShell in Azure Cloud Shell</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-cloud-shell-introduction</link>
<pubDate>Sun, 08 Oct 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Script Sample - Generate Azure Resources Report by Tags</title>
<description><![CDATA[<p>&lt;&lt;<strong>Update</strong>: This post and the sample script is now updated to support latest Azure PowerShell cmdlets&gt;&gt;</p>
<p>When managing resources in Azure, tags are there to help you. They add very valuable metadata to your Azure resources.
In a nutshell, tags are key-value pairs, e.g. &quot;Business Unit = Finance&quot; and &quot;Site = Central US&quot; are two such tags.</p>
<p>These <strong>tags help you to</strong>:</p>
<ul>
<li>Organize and manage your resources</li>
<li>Get categorized insights for chargeback</li>
</ul>
<p>Tags go beyond the boundaries of deployments. You can have a few resources deployed in one resource group and a few others in a second resource group. If you apply the same tag (i.e. the same key and value combination) to resources in both groups, you can view and manage all of them with a single click.</p>
<p>Once in a while, you will want to take a health check of your Azure environment: see which resources exist and which tags are applied to them, and extract this data to a CSV file so that you can apply filters and perform other business intelligence (BI) operations on it. The script below provides exactly that. </p>
<p>The <strong>script gives you</strong> a CSV output report with:</p>
<ul>
<li>All the resources in your Azure Subscription</li>
<li>Type of each resource, so that you can filter on various types</li>
<li>Tags for each of the resources in Azure</li>
</ul>
<p><strong>The Columns</strong> in the Output CSV file (generated by the script) are:</p>
<ol>
<li>Semi-colon separated list of tags</li>
<li>Resource Name</li>
<li>Resource Group Name</li>
<li>Location</li>
<li>Resource Type</li>
<li>Resource Id</li>
<li>Name</li>
<li>Subscription Id</li>
</ol>
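<p>The heart of such a report is flattening each resource's tag collection into a single semicolon-separated column. The actual script uses Azure PowerShell cmdlets; the Python sketch below only illustrates that flattening logic, with hypothetical resource records standing in for real subscription data.</p>

```python
import csv
import io

def flatten_tags(tags):
    """Join a tag dictionary like {'Business Unit': 'Finance'} into 'Key=Value; Key=Value'."""
    if not tags:
        return ""
    return "; ".join(f"{key}={value}" for key, value in sorted(tags.items()))

def build_report(resources):
    """Write one CSV row per resource, with the flattened tags as the first column."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Tags", "ResourceName", "ResourceGroupName", "Location", "ResourceType"])
    for res in resources:
        writer.writerow([
            flatten_tags(res.get("tags")),
            res["name"],
            res["resourceGroup"],
            res["location"],
            res["type"],
        ])
    return out.getvalue()

# Hypothetical records, shaped loosely like what an Azure inventory query returns
resources = [
    {"name": "vm01", "resourceGroup": "rg-finance", "location": "centralus",
     "type": "Microsoft.Compute/virtualMachines",
     "tags": {"Business Unit": "Finance", "Site": "Central US"}},
    {"name": "storacct01", "resourceGroup": "rg-finance", "location": "centralus",
     "type": "Microsoft.Storage/storageAccounts", "tags": None},
]
print(build_report(resources))
```

<p>Untagged resources simply get an empty Tags column, which makes them easy to filter out when hunting for resources that are missing required tags.</p>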
<p>You can find this script on GitHub here: <a href="https://github.com/HarvestingClouds/PowerShellSamples/blob/a4eb910aa8eb2cdd340c2866cde150282b47067e/Scripts/Azure%20Resources%20Report%20by%20Tags.ps1">Azure Resources Report by Tags</a></p>
<p><a href="https://raw.githubusercontent.com/HarvestingClouds/PowerShellSamples/master/Scripts/Get-AzureRmTagsReport.ps1">Direct Link to the Script here. Right click and choose Save As</a></p>
<p>You are welcome to make changes and submit Pull Requests to this script or even fork and make your modifications. </p>]]></description>
<link>https://HarvestingClouds.com/post/script-sample-generate-azure-resources-report-by-tags</link>
<pubDate>Thu, 26 Jan 2017 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Application Insights is now generally available</title>
<description><![CDATA[<p>After a long time in Preview, Azure Application Insights is now generally available from Microsoft.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-application-insights-is-now-generally-available</link>
<pubDate>Tue, 29 Nov 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Site Recovery (ASR) - New feature added to target Resource Groups</title>
<description><![CDATA[<p>A new feature has been added to Azure Site Recovery (ASR) which enables you to target Resource Groups for failover. You can target a separate Resource Group for each server/VM being protected; post failover, this target Resource Group will be used. Earlier, during a failover, a new resource group was created with the same name as the server being failed over.</p>
<p>You can configure this setting in two ways:</p>
<ol>

<li>
When you are <b>enabling replication of new servers</b>, the settings for "Target" will show two new options. 

<ul><li>The first option is "Post-failover resource group", along with a drop-down. This is where you select the target Resource Group; you can pick any existing Resource Group here. If the one you want to use as a target doesn't exist yet, create it before enabling replication. </li> 

<li>The second new option is "Post-failover deployment model", which has only two choices: "Classic" and "Resource Manager". The latter is selected by default and is the one you should use to leverage all the new features Azure has to offer.</li>
</ul>

<br/>
<img src="/images/1479510199582f88b742692.png" alt="Enable Replication">
</li>

<li>
If you have servers that were protected before this feature was introduced, or if you configured the wrong Resource Group while enabling replication and now want to change it, you can do so in this alternate way. Go to your ASR Vault, then to Settings, then to "Replicated Items", and click on the server for which you want to change the resource group. Click on "Compute and Network" in the server properties as shown below. You will see a new option here to select or change the target Resource Group. Click "Save" at the top of the blade after selecting or changing the value.

<br/>

<img src="/images/1479493236582f4674a80d3.png" alt="Compute and Network settings">
</li>
</ol>
<p>This new option makes the failover process in Azure Site Recovery (ASR) much more manageable and overall a better experience.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-site-recovery-asr-new-feature-added-to-target-resource-groups</link>
<pubDate>Thu, 17 Nov 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Troubleshooting Azure Site Recovery (ASR) - Data Replication Not Working</title>
<description><![CDATA[<p>If you have Azure Site Recovery (ASR) set up in your environment and are facing an issue where data replication is stuck, follow this blog to troubleshoot. Data replication can get stuck either during the initial replication or during the delta changes, and this can occur for various reasons. We will inspect the various components involved in ASR. Most of the troubleshooting is done on the Management Server, i.e. the on-premise Configuration/Master Target server.</p>
<h3>1. Check Alerts Details</h3>
<p>Go to the Azure Site Recovery Vault and navigate to the settings. Click on the <strong><em>Alerts and Events</em></strong>. Check the alerts for the data replication being blocked etc. Verify that the problem is related to data replication and not something else. </p>
<p>You can also navigate to the &quot;<strong><em>Replicated Items</em></strong>&quot; in the ASR Vault settings. On the blade for replicated items, click on the server for which the data is not being replicated. A new blade will open for this server's properties. Then click on &quot;Error Details&quot; in the server properties blade's context menu (which you can access by clicking on the ellipsis, i.e. 3 dots, at the top right).</p>
<p>After verifying the issue, proceed to the next sections to troubleshoot.</p>
<h3>2. Check Resource Monitor</h3>
<p>Check if you see any activity in the Resource Monitor; this also validates whether the issue really exists. Sometimes <strong>low bandwidth</strong> combined with <strong>multiple servers</strong> configured against one Management Server can cause this issue. Ensure that this is not the scenario in your case.</p>
<p>From the Task Manager, go to the Performance view and check the bandwidth consumption. Then click on the &quot;Open Resource Monitor&quot; button to launch the Resource Monitor. In the CPU section of the Overview tab, select the below two processes:</p>
<ul>
<li>cxps.exe</li>
<li>cbengine.exe</li>
</ul>
<p>Then click on the Network tab and see if there is any traffic going out to Azure. If the data transfer is working without issues, you should see entries against cbengine going out to a URL that looks something like &quot;blob.aaa1aaa1aa.core.windows.net&quot;, as well as entries against the cxps process.</p>
<p><img src="/images/1479242042582b713a13d26.png" alt="Resource Monitor" /></p>
<h3>3. Check ASR Infrastructure Setup</h3>
<p>First of all, check if the ASR infrastructure setup is correct and nothing is wrong there. To view this, navigate to the ASR Vault. Go to the settings and click on the &quot;Site Recovery Infrastructure&quot;. In the next blade, click on the kind of infrastructure you have setup. E.g. If you are replicating from VMWare or Physical Machines from on-premise to Azure then click on the &quot;Configuration Servers&quot; under the &quot;For VMWare &amp; Physical Machines&quot; section.</p>
<p><img src="/images/1479235374582b572e7bd28.png" alt="Site Recovery Infrastructure" /></p>
<p>Here, check if the Config Server is showing as &quot;Connected&quot;. If not, the problem lies in the communication between the Configuration Server and Azure. Ensure that you are able to connect to the Azure portal from the config server. Also, ensure that all the public URLs for Azure are accessible. Check this link for the exact URLs: <a href="https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-best-practices#verify-url-access">Verify URL Access</a>.</p>
<p>Next, click on the configuration server. This will open another blade with details for the configuration server. Expand the section for &quot;Associated Servers&quot; as marked no. 2 in the screenshot below. Check if all the associated servers, i.e. Process Server, vCenter Server and Master Target servers are connected and showing green tick mark.</p>
<p>Next, check the configuration server health as shown at no. 3 below. Check if all the services are running and showing healthy. Ensure that you have sufficient free space on the configuration server to send the replication data. If you see any services not running then go to the next section to check and start the services on the Management Server on-premise. </p>
<p>You can try refreshing the server after making any configuration changes on it, e.g. increasing memory or freeing up disk space. Click on the &quot;Refresh Server&quot; button as shown at no. 4, at the top of the blade for the configuration server.</p>
<p><img src="/images/1479235804582b58dc26f55.png" alt="Config Server Settings" /></p>
<h3>4. Checking Services on the Management Server</h3>
<p>Check if the services on the Management Server are up and running. You need to check for the below services:</p>
<ul>
<li>InMage PushInstall</li>
<li>InMage Scout Application Service</li>
<li>InMage Scout VX Agent - Sentinel/Outpost</li>
<li>INMAGE-AppScheduler</li>
<li>Microsoft Azure Recovery Services Agent</li>
<li>Microsoft Azure Site Recovery Service</li>
<li>cxprocessserver (This is an important service: it is the service for the InMage CX Process Server)</li>
<li>tmansvc (This is the service for the InMage Volsync Thread Manager Service)</li>
</ul>
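<p>On the Management Server itself you would check these in the Services console (or with PowerShell's Get-Service); if you look after several servers, the checklist logic is also trivial to script. The Python sketch below is illustrative only, run against a hypothetical snapshot of service states rather than a live machine.</p>

```python
# ASR-related services that must be running on the Management Server
REQUIRED_SERVICES = [
    "InMage PushInstall",
    "InMage Scout Application Service",
    "InMage Scout VX Agent - Sentinel/Outpost",
    "INMAGE-AppScheduler",
    "Microsoft Azure Recovery Services Agent",
    "Microsoft Azure Site Recovery Service",
    "cxprocessserver",
    "tmansvc",
]

def stopped_services(states):
    """Return the required services that are not in the 'Running' state.

    `states` maps service name -> state string (a hypothetical inventory
    snapshot); any required service missing from it is reported as stopped.
    """
    return [name for name in REQUIRED_SERVICES
            if states.get(name, "Stopped") != "Running"]

# Hypothetical snapshot: one service stopped after a patch reboot
snapshot = {name: "Running" for name in REQUIRED_SERVICES}
snapshot["cxprocessserver"] = "Stopped"
print(stopped_services(snapshot))  # -> ['cxprocessserver']
```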
<p>Start any service which is not running and check if the problem still exists. Nine times out of ten, the problem is related to these services (e.g. a restart or a patch stopped one of them).</p>
<h3>5. Checking Services on the Server being replicated</h3>
<p>Check if the services on the Server being replicated are up and running. You need to check for the below services:</p>
<ul>
<li>Azure Site Recovery VSS Provider</li>
<li>InMage Scout Application Service</li>
<li>InMage Scout VX Agent - Sentinel/Outpost</li>
</ul>
<p>Start any service which is not running and check if the problem still exists.</p>
<h3>6. Verify Service Account credentials are correct and have required access</h3>
<p>The replication can stop if the service account is not correct or it doesn't have required access. Check if the service account's password expired or changed. </p>
<p>You can use the Configuration Server config tool to check and update the service accounts. This tool can be accessed from this directory path: &quot;<em>D:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\bin</em>&quot; where D is your install directory for ASR setup. The tool name under this directory is &quot;<strong>cspsconfigtool.exe</strong>&quot;.</p>
<p><img src="/images/1479238579582b63b3688bd.png" alt="CSPS Config Tool" /></p>
<h3>7. Check Logs</h3>
<p>There are various ASR logs that get generated on the Management Server. Two key logs that you should check are shown below. This assumes that D is the drive where ASR is installed.</p>
<ul>
<li><strong>Monitoring Logs</strong> - These logs are located at &quot;<em>D:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\var</em>&quot;. Name of the file you should check is &quot;<strong>monitor_ps</strong>&quot;.</li>
<li><strong>VM-Specific ASR Logs</strong> - These logs are located at &quot;<em>D:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems</em>&quot;. There is a folder named with a GUID for each VM. Try to find the folder for your VM's GUID; one indication will be the number of disks and the disk sizes. Once you have located the folder for a VM having replication problems, navigate into its subfolders and locate the perf.log file for your VM's disks. Check to see if there are any errors there.</li>
</ul>
<p>These logs should give you an idea as to what may have been causing the issues.</p>
<h3>In Conclusion</h3>
<p>After all these steps and any changes, you should refresh the Configuration Server as shown in section 3 above. </p>
<p>Let me know if this blog helped in your scenario. </p>]]></description>
<link>https://HarvestingClouds.com/post/troubleshooting-azure-site-recovery-asr-data-replication-not-working</link>
<pubDate>Mon, 14 Nov 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>General Availability - Azure Active Directory (AD) Domain Services</title>
<description><![CDATA[<p><strong>Azure Active Directory Domain Services</strong> are here to revolutionize the way domain services are used. They reduce the infrastructure management burden for IT administrators even further. </p>
<h3>What is this service</h3>
<p>Using this service, you can now set up domains without having to set up domain controllers. You set up your domain in Azure using Domain Services and then have virtual machines join that domain; at no point do you need to set up any domain controllers. You can also use Group Policy with this service to securely administer your domain-joined infrastructure.</p>
<p>This service benefits all customers. <strong>Enterprise customers</strong> get the familiar enterprise-grade service. For <strong>small to medium business customers</strong>, the service makes even more sense: they get an enterprise-level service at a smaller price, since their smaller infrastructure means a smaller number of objects in AD.</p>
<p>Out of the box, this is a highly available service, hosted in globally distributed datacenters.</p>
<h3>When is this service available</h3>
<p>This service is now <strong>Generally Available</strong>. The pricing for this service will start from 1st December 2016.</p>
<p>The payment model is Pay-As-You-Go. The usage will be charged per hour. The chargeback will be based on the total number of AD Objects in your AD Tenant. These objects include users, groups, and domain-joined computers. Directory size and hours are calculated and emitted daily. Usage is prorated to the minute.</p>
<p>Currently, there are 3 tiers. </p>
<ul>
<li>Less than 25,000 directory objects</li>
<li>25,001 to 100,000 directory objects</li>
<li>More than 100,000 directory objects</li>
</ul>
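<p>Since the tiers are defined purely by object count, mapping a tenant to its tier is straightforward to sanity-check. The helper below is a hypothetical sketch: it encodes only the tier boundaries listed above, not the prices themselves, and treats a count exactly at a boundary as belonging to the lower tier.</p>

```python
def aadds_pricing_tier(object_count):
    """Map a directory object count (users + groups + domain-joined computers)
    to the Azure AD Domain Services pricing tier announced at GA."""
    if object_count < 0:
        raise ValueError("object count cannot be negative")
    if object_count <= 25000:
        return "Tier 1: fewer than 25,000 directory objects"
    if object_count <= 100000:
        return "Tier 2: 25,001 to 100,000 directory objects"
    return "Tier 3: more than 100,000 directory objects"

print(aadds_pricing_tier(60000))  # -> Tier 2: 25,001 to 100,000 directory objects
```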
<h3>Is this service available in my region</h3>
<p>At the time of writing this blog post, this service is available in the following regions:</p>
<ul>
<li>East US</li>
<li>East US 2</li>
<li>Central US</li>
<li>South Central US</li>
<li>West US</li>
</ul>
<p>Check this link for the most up-to-date availability: <a href="https://azure.microsoft.com/en-us/regions/services/">Azure Products by Region</a></p>
<p>Also, check the official ADDS page here: <a href="https://azure.microsoft.com/en-us/services/active-directory-ds/">Azure Active Directory Domain Services</a>
And check the up-to-date pricing here: <a href="https://azure.microsoft.com/en-us/pricing/details/active-directory-ds/">Pricing</a></p>
<p>Now it is time for you to go and try this new service for yourself and enjoy its benefits!</p>]]></description>
<link>https://HarvestingClouds.com/post/general-availability-azure-active-directory-ad-domain-services</link>
<pubDate>Thu, 27 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Suspending and Resuming Azure Site Recovery (ASR) Replication on a single or multiple servers</title>
<description><![CDATA[<p>Let us assume that you have enabled the Azure Site Recovery (ASR) replication on various servers. These servers can be:</p>
<ul>
<li>On Premise VMWare VMs</li>
<li>On Premise Physical Servers</li>
<li>Azure ASM (older portal) VMs</li>
</ul>
<p>The purpose could be anything from setting up disaster recovery for your infrastructure to using ASR for migrating workloads from on-premise to Azure. Whatever the reason, you may need to suspend and resume ASR replication on one or more target servers.</p>
<p>Currently, ASR does not offer a built-in way to suspend and resume replication, but you can do this manually just as easily. </p>
<p>To <strong>Suspend</strong> the ASR replication on a particular server, all you need to do is:</p>
<ol>
<li>Log into the server on which ASR replication is currently going on and you want to suspend the replication.</li>
<li>Open the Services (Run -&gt; services.msc)</li>
<li>Locate the following services and stop them:
<ul>
<li>Azure Site Recovery VSS Provider</li>
<li>InMage Scout Application Service</li>
<li>InMage Scout VX Agent - Sentinel/Outpost</li>
</ul></li>
</ol>
<p>Check out these services below:</p>
<p><img src="/images/1477354219580ea2eb7f4e6.png" alt="Services 1" /></p>
<p><img src="/images/1477354229580ea2f52b728.png" alt="Services 2" /></p>
<p>To <strong>Resume</strong> the ASR replication, just do the opposite, i.e. log into the server and start these services. </p>
<p>Until Azure adds this feature directly in the portal, this easy manual step is the workaround for suspending and resuming the replication on servers.</p>]]></description>
<link>https://HarvestingClouds.com/post/suspending-and-resuming-azure-site-recovery-asr-replication-on-a-single-or-multiple-servers</link>
<pubDate>Tue, 25 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Creating ASR Template from an existing Azure Infrastructure and Modifying It</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>This blog post is for you if:</p>
<ul>
<li>You want to back up an infrastructure configuration/setup in Azure and redeploy it to another environment </li>
<li>You want to create similar infrastructure as one of existing deployments in Azure</li>
<li>You want to modify the configurations of existing Azure IaaS infrastructure and redeploy various elements</li>
</ul>
<p>This <strong>Power Tip</strong> is really easy once you know where the option is. </p>
<ol>
<li>If you want to make the template for all the resources in a Resource Group in Azure, then go to the properties of the Resource Group and find the option for &quot;<strong>Automation Script</strong>&quot;.</li>
<li>If you want to get the template only for a particular resource, then navigate to that resource in the Azure Portal and open its settings. You will find the same &quot;<strong>Automation Script</strong>&quot; option. </li>
</ol>
<p>You can check this option in the below screenshot.</p>
<p><img src="/images/1477349409580e902123ab1.png" alt="Automation Script" /></p>
<p>Once you click on the Automation Script option in the settings (of a resource group or a resource) then you will be presented with the complete JSON template along with the JSON outline on the right side (marked 2 above in the image).</p>
<p>You have various options for the actions to take on the template (marked 3 in the image above):</p>
<ul>
<li>Download the template</li>
<li>Add it to the Library to deploy the same resources again and again in your subscription</li>
<li>Directly deploy the resources again with the modifications you make </li>
</ul>
<p>Normally, you would download the template to make edits to it. After downloading, you should start cleaning up the template. There are only 4 major tasks you need to perform as part of the cleanup:</p>
<ol>
<li>Remove any <strong>hard-coded values</strong> for various dependent resources e.g. NIC for a VM, VHD for a VM etc.</li>
<li>Remove any resources and dependent parameters that you don't need.</li>
<li>Create <strong>Parameters</strong> for the values you want to change for each deployment and want the end user to provide during the deployment.</li>
<li>Create <strong>Variables</strong> for the values which can have fixed values but are being used at multiple locations in your template.</li>
</ol>
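<p>As an illustration of steps 1 and 3 above, a hard-coded NIC reference exported by the Automation Script option can be replaced with an expression built from a parameter (the parameter name below is illustrative):</p>
<pre><code>"networkProfile": {
    "networkInterfaces": [
        {
            "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]"
        }
    ]
}</code></pre>
<p>with a matching entry in the template's parameters section:</p>
<pre><code>"parameters": {
    "nicName": {
        "type": "string",
        "metadata": { "description": "Name of the network interface to attach to the VM" }
    }
}</code></pre>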
<p>That's all there is to it. Using this tip, you can jump-start your ARM template development: you don't need to start from scratch and can base your templates on existing deployments.</p>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-creating-asr-template-from-an-existing-azure-infrastructure-and-modifying-it</link>
<pubDate>Mon, 24 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step Azure Resource Manager (ARM) Templates - Index</title>
<description><![CDATA[<p><strong>Azure Resource Manager (ARM) Template</strong> is a JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group. It also defines the dependencies between the deployed resources. The template can be used to deploy the resources consistently and repeatedly.</p>
<p>ARM Templates can be used for the deployment of resources on both Azure and Azure Stack. Using these templates for all deployments provides you with various <strong>benefits</strong> including:</p>
<ul>
<li><strong>Declarative Deployment</strong> – All you need to do is declare what you need to deploy. You don't need to create any complex rules or write lengthy scripts.</li>
<li><strong>Idempotency</strong> – You can deploy the same template over and over again without affecting the current resources.</li>
<li><strong>Predictability</strong> - Using Templates you can have accurate predictability when performing large deployments. You reduce any manual errors.</li>
<li><strong>Repetition without Errors</strong> - You can deploy the same infrastructure over and over again (e.g. in dev-test environments and then in production).</li>
</ul>
<p>This series of posts tries to decode and explain ARM Templates &quot;Step by Step&quot;.</p>
<p>This post is an index of all the posts, in sequence, for understanding the Azure Resource Manager (ARM) Templates. This post will be updated regularly as more posts on this topic are added.</p>
<ol>
<li><a href="#">Index</a> </li>
<li><a href="/post/step-by-step-arm-templates-json-101-for-it-administrators/">JSON 101 for IT Administrators</a></li>
<li><a href="/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-all-components/">What is in an ARM Template - Understanding All Components</a></li>
<li><a href="/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters/">What is in an ARM Template - Understanding Components 2 - Parameters</a></li>
<li><a href="/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables/">What is in an ARM Template - Understanding Components 3 - Variables</a></li>
<li><a href="/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources/">What is in an ARM Template - Understanding Components 4 - Resources</a></li>
<li><a href="/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs/">What is in an ARM Template - Understanding Components 5 - Outputs</a></li>
<li><a href="/post/step-by-step-arm-templates-helper-functions/">Helper Functions in ARM Templates</a></li>
<li><a href="/post/step-by-step-arm-templates-building-your-first-arm-template/">Building your first ARM Template</a></li>
<li><a href="/post/step-by-step-arm-templates-deploying-template-using-azure-portal/">Deploying Template Using Azure Portal</a></li>
<li><a href="/post/step-by-step-arm-templates-deploying-template-using-azure-powershell/">Deploying Template Using Azure PowerShell</a></li>
<li><a href="/post/step-by-step-arm-templates-creating-parameters-file-for-an-arm-template/">Creating Parameters file for an ARM Template</a></li>
<li><a href="/post/step-by-step-arm-templates-authoring-arm-templates-using-visual-studio/">Authoring ARM Templates using Visual Studio</a></li>
<li><a href="/post/step-by-step-arm-templates-deploying-arm-templates-using-visual-studio/">Deploying ARM Templates using Visual Studio</a></li>
<li><a href="/post/step-by-step-arm-templates-iterating-and-creating-multiple-instances-of-a-resource/">Iterating and creating multiple instances of a resource</a></li>
<li><a href="/post/step-by-step-arm-templates-visualizing-arm-templates-and-generating-diagrams/">Visualizing ARM Templates and Generating Diagrams</a></li>
<li><a href="/post/step-by-step-arm-templates-using-key-vault-to-securely-provide-information-in-arm-templates/">Using Key Vault to Securely Provide Information in ARM Templates</a></li>
<li><a href="/post/step-by-step-arm-templates-providing-powershell-scripts-to-run-after-vm-deployment-via-arm-template/">Providing PowerShell Scripts to Run after VM deployment via ARM Template</a></li>
<li><a href="/post/step-by-step-arm-templates-deploying-a-windows-vm-with-oms-integration/">Deploying a Windows VM with OMS integration</a></li>
<li><a href="/post/step-by-step-arm-templates-creating-asr-template-from-an-existing-azure-infrastructure-and-modifying-it/">Creating ASR Template from an existing Azure Infrastructure and Modifying It</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-azure-resource-manager-arm-templates-index</link>
<pubDate>Fri, 21 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Providing PowerShell Scripts to Run after VM deployment via ARM Template</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>By providing PowerShell scripts to run after VM deployment via an ARM template, you can accomplish various tasks:</p>
<ul>
<li>You can set up different features and roles on the VM </li>
<li>You can set up a web server </li>
<li>You can set up a SQL database and configure it</li>
<li>You can configure custom policies</li>
<li>And so on...</li>
</ul>
<p>You first need to have the PowerShell script files uploaded to a storage account.
Then you add an <strong>Extension resource</strong> (<em>Microsoft.Compute/virtualMachines/extensions</em>) nested inside a VM. This extension resource should be of type &quot;<strong>CustomScriptExtension</strong>&quot;. You provide the URLs to the PowerShell scripts inside this custom script extension.</p>
<h3>Preparation</h3>
<p>As part of the preparation process you need to:</p>
<ul>
<li>Ensure that the PowerShell scripts are uploaded to the Storage Account and that you have the complete URL to the blob. </li>
<li>Or you can upload the scripts to the GitHub and get the Raw file URL</li>
<li>If there is more than one script, one of the ps1 files should act as a master script that internally invokes the others. This master file will be triggered via the template, and the URLs of all the files will also be provided via the template.</li>
</ul>
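<p>As an illustration, the upload step could be scripted with the Azure PowerShell cmdlets of this era. The below is a sketch only; the storage account name, key, container, and file paths are placeholders for your own values:</p>
<pre><code># Sketch: upload the ps1 files to a blob container (names are placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "Yourstorageaccount" -StorageAccountKey "yourStorageAccountKey"

# Create the container once (skip if it already exists)
New-AzureStorageContainer -Name "customscriptfiles" -Context $ctx -Permission Blob

# Upload the master script and the secondary script
Set-AzureStorageBlobContent -File ".\start.ps1" -Container "customscriptfiles" -Context $ctx
Set-AzureStorageBlobContent -File ".\secondaryScript.ps1" -Container "customscriptfiles" -Context $ctx</code></pre>
<p>The resulting blob URLs are what you then place in the fileUris array of the extension.</p>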
<h3>Providing and configuring Scripts to Run After VM Deployment</h3>
<p>Define the below resource to provide PowerShell scripts to be run after VM deployment:</p>
<pre><code>{
   "type": "Microsoft.Compute/virtualMachines/extensions",
   "name": "MyCustomScriptExtension",
   "apiVersion": "2015-05-01-preview",
   "location": "[parameters('location')]",
   "dependsOn": [
       "[concat('Microsoft.Compute/virtualMachines/',parameters('vmName'))]"
   ],
   "properties": {
       "publisher": "Microsoft.Compute",
       "type": "CustomScriptExtension",
       "typeHandlerVersion": "1.7",
       "autoUpgradeMinorVersion": true,
       "settings": {
           "fileUris": [
               "http://Yourstorageaccount.blob.core.windows.net/customscriptfiles/start.ps1",
               "http://Yourstorageaccount.blob.core.windows.net/customscriptfiles/secondaryScript.ps1"
           ],
           "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1"
       }
   }
}</code></pre>
<p><strong>How it works:</strong></p>
<ul>
<li>Both files, i.e. start.ps1 and secondaryScript.ps1, are downloaded from the storage account after VM deployment. Be sure to replace the URLs with your actual storage account blob URLs. You can add more files if needed.</li>
<li>&quot;start.ps1&quot; is the main PowerShell script, which should invoke secondaryScript.ps1 internally</li>
<li>The commandToExecute property is used to invoke the start.ps1 PowerShell script on the deployed VM</li>
</ul>
<h3>Passing Parameters to the PowerShell Script dynamically</h3>
<p>To pass parameters to the PowerShell script, use the commandToExecute property. </p>
<p>One such example to pass the parameters is shown below:</p>
<pre><code>"commandToExecute": "[concat('powershell.exe -ExecutionPolicy Unrestricted -File start.ps1', ' -domainName ', parameters('domainNameParameter'))]"</code></pre>
<p>Note the use of the &quot;concat&quot; helper function to build the value of &quot;commandToExecute&quot;. Also note the leading and trailing spaces in the second argument of concat, i.e. &quot; -domainName &quot;.</p>
<p>The parameter &quot;domainNameParameter&quot; should already be defined in the parameters section of the template. If its value is &quot;testdomain.com&quot;, the dynamically generated command becomes:</p>
<pre><code>powershell.exe -ExecutionPolicy Unrestricted -File start.ps1 -domainName testdomain.com</code></pre>
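<p>For completeness, a minimal start.ps1 that accepts this parameter could be sketched as below. The parameter name must match what commandToExecute passes; the body here is purely illustrative:</p>
<pre><code>param(
    # Receives the value passed via -domainName in commandToExecute
    [string]$domainName
)

# Illustrative only: log the received value, then invoke the secondary script
Add-Content -Path "C:\deploy.log" -Value "Configuring for domain: $domainName"
&amp; "$PSScriptRoot\secondaryScript.ps1"</code></pre>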
<h3>Securing the Access to the PowerShell Script File in Storage account</h3>
<p>Let us assume you want to deploy a Windows VM and keep the script settings protected. Use the below sample to provide the PowerShell files:</p>
<pre><code>{
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.7",
    "settings": {
        "fileUris": [
            "http://Yourstorageaccount.blob.core.windows.net/customscriptfiles/start.ps1"
        ]
    },
    "protectedSettings": {
        "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1",
        "storageAccountName": "yourStorageAccountName",
        "storageAccountKey": "yourStorageAccountKey"
    }
}</code></pre>
<p>Note the use of &quot;protectedSettings&quot; above. This time you also specify the Storage Account Name and the Storage Account Key.</p>
<p>You can also refer the official documentation here: <a href="https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-extensions-customscript/">Windows VM Custom Script extensions with Azure Resource Manager templates</a>.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-providing-powershell-scripts-to-run-after-vm-deployment-via-arm-template</link>
<pubDate>Wed, 19 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Using Key Vault to Securely Provide Information in ARM Templates</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>When providing passwords and other confidential information in ARM Templates, ensure that you don't hard-code these values anywhere. You should not compromise the security of the system while trying to automate deployments. The end goal is to automate as much as possible while reducing manual involvement. </p>
<p><strong>Key Vaults</strong> solve this problem without compromising security. In fact, they make the whole solution more secure with minimal manual intervention.</p>
<h3>Setting up the Key Vault</h3>
<p>We first need to setup the Key Vault in Azure to be able to use it via ARM Template parameters.</p>
<ol>
    <li>
<b>Create a Key Vault in Azure</b> by going to <i>New -> Security + Identity -> Key Vault</i>. Provide a name, subscription, resource group etc. and provision the Key Vault. Once it is created, navigate to it by clicking on "More Services" and searching for Key Vault, then click on the name of the vault you created. In this example, we have named the key vault "TestKeyVault101".

<br />
<b>Note</b> that this feature is in Preview at the time of writing.

    </li>

    <li>
        Next, we need to <b>Add a Secret</b> in the key vault. Click on the Secrets and then the + Add button at the top, as shown below:
<br /><br />
<img alt="Adding Secret" src="/images/1476818644580676d447b16.png" />
<br /><br />
Next, in the "Create a secret" blade, set Upload Options to Manual. Provide a name and value for the secret; the value is the password you want to store securely.
Ensure that Enabled is set to Yes. Optionally, you can set activation and expiration dates. In this example, we are setting the Secret Name to "DefaultAdminPasswordSecret".
<br /><br />
<img alt="Creating Secret" src="/images/1476818650580676da89fee.png" />
    </li>

    <li>
        Next, we will set the <b>Access Policies </b> to provide access to the user under the context of which the template will be deployed. This is the user which will be accessing the Key Vault. Go to Key Vault settings and select Access Policies. Add the new user as shown below:
<br /><br />
<img alt="Access Policies" src="/images/14768190995806789bad0c4.png" />
<br /><br />

    </li>

    <li>
        Next, we will set the <b>Advanced Access Policies </b> to indicate that this key vault can be accessed via ARM Templates. Go to Key Vault settings and select Advanced Access Policies. Ensure that the checkbox for "<i>Enable access to Azure Resource Manager for template deployment</i>" is checked as shown below:
<br /><br />
<img alt="Access Policies" src="/images/1476819105580678a1cfc9f.png" />
<br /> <br />

    </li>

</ol>
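<p>The portal steps above can also be scripted. Below is a sketch using the AzureRM cmdlets of this era, with the vault and secret names from this example; verify the exact parameters against your module version before relying on it:</p>
<pre><code># Create the vault, then allow ARM template deployments to read from it
New-AzureRmKeyVault -VaultName "TestKeyVault101" -ResourceGroupName "TestRG101" -Location "West US"
Set-AzureRmKeyVaultAccessPolicy -VaultName "TestKeyVault101" -EnabledForTemplateDeployment

# Add the secret holding the admin password
$secret = ConvertTo-SecureString "yourPasswordHere" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "TestKeyVault101" -Name "DefaultAdminPasswordSecret" -SecretValue $secret</code></pre>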
<p>We are now all set with our Key Vault. Next, we will be using the secret we created to set the local Administrator user's password.</p>
<h3>Using the Key Vault Secret in ARM Template</h3>
<p>Let us assume that you have a JSON ARM Template which deploys a VM. One of the parameters in this template is AdminPassword. You want to use the Key Vault Secret to provide the value for this parameter. </p>
<p><strong>First</strong>, ensure that the parameter is declared as <i>securestring</i> as shown below:</p>
<pre><code>"adminPassword": {
    "type": "securestring",
    "metadata": {
        "description": "Password for local admin account."
    }
}</code></pre>
<p><strong>Next</strong>, we need a parameters file for this template; if you don't have one already, create a new one. We provide the reference to the Key Vault Secret as the value of the admin password parameter in this file. The general syntax for providing the reference is shown below:</p>
<pre><code>"adminPassword": {
  "reference": {
    "keyVault": {
      "id": "Key Vault Id Here"
    },
    "secretName": "Name of the secret in Azure Key Vault"
  }
}</code></pre>
<p>The ID in the above syntax takes the form:</p>
<p><em>/subscriptions/{guid}/resourceGroups/{group-name}/providers/Microsoft.KeyVault/vaults/{vault-name}</em>. </p>
<p>Replace <em>{guid}</em> with the actual GUID of the subscription (without the curly braces), <em>{group-name}</em> with the actual name of the resource group, and <em>{vault-name}</em> with the actual name of the Key Vault.</p>
<p>You can also find the Resource ID for the Key Vault by navigating to it in the Azure Portal and then checking its properties as shown below:</p>
<p><img src="/images/147682308858068830f2f8e.png" alt="Key Vault Resource ID" /></p>
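<p>Alternatively, the Resource ID can be retrieved with PowerShell. A one-line sketch using the AzureRM Key Vault cmdlet, with the vault name from this example:</p>
<pre><code># Fetch the vault and print its full resource ID
(Get-AzureRmKeyVault -VaultName "TestKeyVault101").ResourceId</code></pre>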
<p>The complete parameter file looks like below:</p>
<pre><code>{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "OtherParameter": {
      "value": "otherValue"
    },
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/11111aaa-1a11-1a11-a1aa-1a1111a111a1/resourceGroups/TestRG101/providers/Microsoft.KeyVault/vaults/TestKeyVault101"
        },
        "secretName": "DefaultAdminPasswordSecret"
      }
    }
  }
}</code></pre>
<p>Next, deploy the template using PowerShell and pass this parameters file as explained here: <a href="/post/step-by-step-arm-templates-deploying-template-using-azure-powershell/">Deploying Template Using Azure PowerShell</a>. </p>
<p>Example PowerShell cmdlet to deploy will look like:</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateFile .\TemplateFile.json -TemplateParameterFile .\ParametersFile.json</code></pre>
<p>Now that you know how to use values from Key Vaults, you can make the automated deployment of resources more secure in your environment.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-using-key-vault-to-securely-provide-information-in-arm-templates</link>
<pubDate>Tue, 18 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Visualizing ARM Templates and Generating Diagrams</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>When developing ARM Templates, from time to time you will need to:</p>
<ul>
<li>Visualize your ARM Templates</li>
<li>Generate Diagrams for your ARM Templates</li>
</ul>
<p>Microsoft has provided an Open Source tool for this named &quot;ARMVIZ&quot; (short for ARM Visualizer). This tool can be accessed by navigating to the below URL:</p>
<p><a href="http://armviz.io/">http://armviz.io/</a></p>
<h3>Navigating ARMVIZ</h3>
<p>ARMVIZ is a nice in-browser application to visualize all the components in a template. It also shows the dependencies between various components. Using this web application you can:</p>
<ul>
<li>Either visualize your own developed template,</li>
<li>Or inspect existing templates on GitHub</li>
</ul>
<p>Let's take a quick tour of the interface:</p>
<p><img src="/images/14768067785806487ad77dd.png" alt="ARMVIZ Interface" /></p>
<p>I have numbered various elements of the interface in the above diagram. Let's quickly review these elements:</p>
<ol>

    <li>
              <b>Designer</b> - This is represented by an "eye" icon on the left bar. This should be selected by default. If you are in the editor mode then you can click this and the diagram will be shown in the middle portion of the screen.
    </li>

    <li>
              <b>Editor</b> - This is represented by "</>" text for code on the left bar. Clicking on this will take you to the editor portion of the ARMVIZ tool. In this area, you can edit your template while still in the tool. You can add or remove components. You can even edit the components or add dependencies.
    </li>

     <li>
              <b>Canvas area</b> - This is the main screen (the middle area) where the template is displayed.
    </li>

    <li>
<b>File Menu</b> - This is the only menu in the web application, located in the top bar. It has two options:
              <ol type="a">
                       <li>
                             <b>Open Local Template</b> - You can open an ARM Template JSON from your local computer to visualize using this menu option.
                       </li>
                       <li>
                             <b>Download Template</b> - You can download the current template by using this menu option.
                       </li>
              </ol>
    </li>

    <li>
<b>Quickstart ARM Templates</b> - This is a link to the external library of Quickstart ARM Templates on GitHub. These starter templates can save you a lot of time: instead of starting from scratch, you can use them to speed up ARM template development.
    </li>

</ol>
<p>This is how the Editor portion of the tool looks. Use this area to edit or update your template.
<strong>Note:</strong> If there are mistakes in your template, such as a missing parenthesis, the designer will not show a diagram. </p>
<p><img src="/images/147680732358064a9ba1310.png" alt="Editor Area" /></p>
<p>You can zoom into and zoom out of your template diagram by rolling the mouse wheel. You can also drag and reposition various elements.
Take a screenshot once you have repositioned the elements as per your requirements and have zoomed into an appropriate level.</p>
<p>The screenshot below is taken from a much more complex template.</p>
<p><img src="/images/1476806791580648877fed3.png" alt="Complex ARM Template Diagram" /></p>
<p>In conclusion, ARMVIZ can enable you to easily visualize your ARM Templates. It can empower you to generate diagrams for your documentation and to present to your team.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-visualizing-arm-templates-and-generating-diagrams</link>
<pubDate>Mon, 17 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Deploying a Windows VM with OMS integration</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>You can deploy a Windows VM with OMS integration: the template installs the OMS extension and then onboards the VM to a specified workspace. </p>
<h3>Prerequisites</h3>
<p>You must already have an OMS workspace set up in your subscription, and you need the following information about it:</p>
<ol>
<li>OMS workspace ID</li>
<li>OMS workspace Key</li>
</ol>
<p>You may obtain the workspace ID and key by going to the Connected Sources tab in the Settings page in the OMS Portal or to the Direct Agent blade in the Azure portal.</p>
<p>In the Azure Portal, go to Log Analytics and click on the OMS Workspace you want to use. Then click on &quot;OMS Portal&quot; to navigate to the OMS Portal.</p>
<p><img src="/images/14768464545806e3762d97c.png" alt="Link to OMS Portal" /></p>
<p>In the OMS portal, navigate to the Settings.</p>
<p><img src="/images/14768422705806d31e60125.png" alt="OMS Portal Settings" /></p>
<p>In Settings, go to the Connected Sources -&gt; Windows Servers. Note the Workspace ID and the Primary Key as shown below:</p>
<p><img src="/images/14768422755806d323b2caa.png" alt="OMS Portal ID and Key" /></p>
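<p>If you prefer PowerShell over the portal, the same values can be retrieved with the AzureRM OperationalInsights cmdlets. This is a sketch with placeholder resource group and workspace names:</p>
<pre><code># Workspace ID (CustomerId) and primary key via AzureRM cmdlets (placeholder names)
$ws = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "YourRG" -Name "YourWorkspace"
$ws.CustomerId
(Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName "YourRG" -Name "YourWorkspace").PrimarySharedKey</code></pre>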
<h3>ARM Template Sections for OMS integration</h3>
<p>Within the VM resource, you need to define the OMS extension as shown below:</p>
<pre><code>  "resources": [
    {
      "type": "extensions",
      "name": "Microsoft.EnterpriseCloud.Monitoring",
      "apiVersion": "[variables('apiVersion')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
      ],
      "properties": {
        "publisher": "Microsoft.EnterpriseCloud.Monitoring",
        "type": "MicrosoftMonitoringAgent",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "workspaceId": "Your Workspace ID Here"
        },
        "protectedSettings": {
          "workspaceKey": "Your Workspace Key Here"
        }
      }
    }
  ]</code></pre>
<p>The above configures OMS on the VM. Note that you need the nested extension resource named &quot;Microsoft.EnterpriseCloud.Monitoring&quot; with type &quot;MicrosoftMonitoringAgent&quot;. </p>
<p>Also note the Workspace ID and Key placeholders in the template section above. Enter the values for your environment, which we found in the Prerequisites section above. </p>
<h3>Providing the Workspace ID and Workspace Key Dynamically</h3>
<p>You can also provide the Workspace Id and the Workspace Key dynamically by only using the OMS Workspace name. Follow the below sample. Note the use of reference, listKeys, and resourceId helper functions.</p>
<pre><code>"settings": {
          "workspaceId": "[reference(resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName')), '2015-03-20').customerId]"
        },
        "protectedSettings": {
          "workspaceKey": "[listKeys(resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName')), '2015-03-20').primarySharedKey]"
        }</code></pre>
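<p>The &quot;workspaceName&quot; parameter referenced above would need to be declared in the template's parameters section; for example:</p>
<pre><code>"parameters": {
  "workspaceName": {
    "type": "string",
    "metadata": {
      "description": "Name of the existing OMS workspace in this subscription"
    }
  }
}</code></pre>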
<p><strong>Reference:</strong> You can check the complete quick starter template for OMS integration here: <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/201-oms-extension-windows-vm">GitHub Sample - Deployment of a Windows VM with OMS Extension</a></p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-deploying-a-windows-vm-with-oms-integration</link>
<pubDate>Sun, 16 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Automation Preview Solution - Start/Stop VMs during off-hours </title>
<description><![CDATA[<p>Starting and stopping VMs during off-hours can mean significant cost savings for you. We have been implementing this via custom Runbooks and schedules for various customers. Now there is out-of-the-box support for this within Azure. The feature is currently in Preview, but you can build on it.</p>
<h3>What do I need - Prerequisites</h3>
<p>Before beginning check that your region has this feature available. Just like with any other automation solution, you will need to have:</p>
<ol>
<li>OMS Workspace (or you can create new while adding the solution)</li>
<li>Automation Account (or you can create new)</li>
<li>Azure Run As account (and not the Microsoft Account)</li>
<li>For email support, Office 365 business-class subscription is required</li>
</ol>
<p>Note: The VMs that you want to manage should be in the same subscription and resource group as where the Automation account resides.</p>
<h3>How to Add</h3>
<p>To Add the solution, click on &quot;+ New&quot; symbol and search for &quot;Start/Stop VMs during off-hours&quot;. You will find the below solution available to be created:</p>
<p><img src="/images/1476808753580650317ab68.png" alt="Start and Stop VMs Preview Solution" /></p>
<h3>What does it Contain</h3>
<p>The solution is a combination of various automation assets:</p>
<ol>
<li>Runbooks</li>
<li>Variables</li>
<li>Schedules</li>
<li>Credentials</li>
</ol>
<p>You can change some configurations during and some after the deployment. </p>
<p>Find out more here: <a href="https://azure.microsoft.com/en-us/documentation/articles/automation-solution-vm-management/">Start/Stop VMs during off-hours [Preview]</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-automation-preview-solution-start-stop-vms-during-off-hours</link>
<pubDate>Tue, 11 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure RemoteApp is going away. New Purchases in portal are now stopped </title>
<description><![CDATA[<p>Today I got an email saying &quot;Action recommended: Deleting unnecessary RemoteApp collections can save you money&quot;. This email is also a reminder from Microsoft that RemoteApp is going away, and we need to plan the migration of existing RemoteApp deployments to other platforms.</p>
<h3>What this means</h3>
<p>This means various things to you. Most prominently:</p>
<ul>
<li>Over the next year, support for RemoteApp is going away</li>
<li>You need to plan and migrate the existing RemoteApp application to other platforms</li>
<li>New Purchases in the Azure portal for RemoteApp are no longer available</li>
</ul>
<h3>When is the service coming to Stop</h3>
<p>The service will have support through <strong>August 31st, 2017</strong>. That's when this service will come to a stop. New service purchase was stopped effective October 1st, 2016.</p>
<h3>What are my Options</h3>
<p>You have various options for migration. The option being <strong>recommended</strong> by Microsoft is using &quot;<strong>Citrix XenApp express</strong>&quot;. In fact, Microsoft is partnering up with Citrix on this. This service is not yet available and is currently under development. As this will be the native option in Azure this will be your best bet once it is announced. You can learn more about this solution here on Citrix site: <a href="https://www.citrix.com/global-partners/microsoft/remote-app.html">Citrix and Microsoft</a></p>
<p>The second option is to use Remote Desktop Services (RDS) deployed on Azure IaaS. This means to set up the infrastructure yourself and then deploy and host the RDS solution on that infrastructure in Azure. You can know more about the steps here: <a href="https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/host-desktops-and-apps-in-remote-desktop-services">Host desktops and apps in Remote Desktop Services on Azure</a></p>
<p>Another option is to use hosted solutions from various 3rd party vendors. You can find such solutions from various partners in the Azure marketplace. You can also see the complete list of these hosting partners here: <a href="https://technet.microsoft.com/en-us/windows-server-docs/compute/remote-desktop-services/rds-hosting-partners">RDS - Partners for hosting desktops and apps</a></p>
<p><strong>In conclusion,</strong> I recommend waiting for the Citrix XenApp express solution. If you need a new remote application solution sooner, you will need to either deploy your own solution or use one of the hosted solutions from 3rd party vendors. You can read more about the official announcement here: <a href="https://blogs.technet.microsoft.com/enterprisemobility/2016/08/12/application-remoting-and-the-cloud/?WT.mc_id=azurebg_email_Trans_1218_No_Usage_Azure_RemoteApp">Application Remoting and the Cloud</a></p>
<link>https://HarvestingClouds.com/post/azure-remoteapp-is-going-away-new-purchases-in-portal-are-now-stopped</link>
<pubDate>Mon, 10 Oct 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Iterating and creating multiple instances of a resource</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>In Azure Resource Manager (ARM) templates, you can define a resource once and then iterate/loop over that definition to create multiple instances of that resource.
There are 3 special constructs in ARM templates to help you with this. </p>
<p>These <strong>constructs</strong> are:</p>
<ul>
<li><strong>copy</strong> - This is a property that is defined within the resource. This is the construct which when defined indicates that this resource needs to be looped over and created multiple times. It also specifies the number of times to iterate via &quot;count&quot; property.</li>
<li><strong>copyIndex()</strong> - Used to access the current iteration value. Its value for the first iteration is <strong>zero</strong>; for the second iteration it is 1, and so on. You can pass it an integer as a parameter, which offsets every iteration's value by that amount. E.g. copyIndex(20) will compute to 20 in the first iteration, 21 in the second iteration, and so on.</li>
<li><strong>length</strong> - This function computes the number of elements in an array. It can be used to set the &quot;count&quot; property of the &quot;copy&quot; construct.</li>
</ul>
<p><strong>Note:</strong> Arrays are always <strong>zero indexed</strong>. What that means is that the first element of the array is indexed at 0, the second element of the array is indexed at 1, and so on...</p>
<h3>1. Simple Example</h3>
<p>Let us understand these constructs using an example.</p>
<pre><code>"parameters": { 
  "count": { 
    "type": "int", 
    "defaultValue": 3 
  } 
}, 
"resources": [ 
  { 
      "name": "[concat('HarvestingClouds-', copyIndex(100))]", 
      "type": "Microsoft.Web/sites", 
      "location": "Central US", 
      "apiVersion": "2015-08-01",
      "copy": { 
         "name": "websitescopy", 
         "count": "[parameters('count')]" 
      }, 
      "properties": {
          "serverFarmId": "hostingPlanName"
      }
  } 
]</code></pre>
<p>The above example will <strong>result</strong> in creation of below 3 web apps in Azure:</p>
<ul>
<li>HarvestingClouds-100</li>
<li>HarvestingClouds-101</li>
<li>HarvestingClouds-102</li>
</ul>
<p>Note the usage of &quot;copy&quot; property in the above code example:</p>
<pre><code> "copy": { 
             "name": "websitescopy", 
             "count": "[parameters('count')]" 
          }</code></pre>
<p>As you can notice above, the value of this property is another JSON object. This object has further two properties: </p>
<ul>
<li>First is the name property, which provides the name to the looping construct. This can be any meaningful name. </li>
<li>The second property is the count, which specifies how many times this resource definition should be deployed. Note that the value is set to the parameter named &quot;count&quot;. The name of the parameter can be anything but the value of the parameter has to be a number (i.e. an integer).</li>
</ul>
<p>Next, note how the name of the web application is constructed using the copyIndex() helper function.</p>
<pre><code>"name": "[concat('HarvestingClouds-', copyIndex(100))]"</code></pre>
<p>The above value uses two helper functions. The first is &quot;concat()&quot;, which concatenates (joins) two values: the prefix string &quot;HarvestingClouds-&quot; and the result of the second helper function, <code>copyIndex(100)</code>. This returns the current iteration value offset by 100. So for the first iteration the value will be 0+100 = 100, for the second iteration 1+100 = 101, and so on...</p>
<h3>2. Example with an Array</h3>
<p>Let's assume that you want to deploy multiple web apps for different purposes. You need one web app for Production, one for Staging or testing and one for Development. You want to name the web apps deployed with the purpose concatenated.
The below example uses an array to set the values for the web app name:</p>
<pre><code>"parameters": { 
  "purpose": { 
     "type": "array", 
         "defaultValue": [ 
         "Production", 
         "Staging", 
         "Development" 
      ] 
  }
}, 
"resources": [ 
  { 
      "name": "[concat('HarvestingClouds-', parameters('purpose')[copyIndex()])]", 
      "type": "Microsoft.Web/sites", 
      "location": "Central US", 
      "apiVersion": "2015-08-01",
      "copy": { 
         "name": "websitescopy", 
         "count": "[length(parameters('purpose'))]" 
      }, 
      "properties": {
          "serverFarmId": "hostingPlanName"
      } 
  } 
]</code></pre>
<p>The <strong>output</strong> of the above sample will be 3 web apps deployed in Azure with following names:</p>
<ul>
<li>HarvestingClouds-Production</li>
<li>HarvestingClouds-Staging</li>
<li>HarvestingClouds-Development</li>
</ul>
<p>Note in the above code sample that the parameter &quot;purpose&quot; is an array with 3 values i.e. Production, Staging, and Development. Then in the &quot;copy&quot; construct the count property is set using the length of this array as shown below. As there are 3 elements in the array, the value of count will be 3 and the resource will be deployed 3 times.</p>
<pre><code>"count": "[length(parameters('purpose'))]" </code></pre>
<p>Next, the name of the web app is set using the copyIndex() and the array itself as shown below:</p>
<pre><code>"name": "[concat('HarvestingClouds-', parameters('purpose')[copyIndex()])]"</code></pre>
<p>As earlier, it uses concat helper function to add two strings. The first string is simple text i.e. &quot;HarvestingClouds-&quot;, which becomes the prefix for the web app name. Second is finding out the value of the array based on the current iteration. For the first iteration, copyIndex() will compute to zero, therefore the second parameter becomes <code>parameters('purpose')[0]</code>. This will fetch the 0th element of the array which is Production. Similarly, for the second iteration, copyIndex() will compute to 1, therefore the second parameter becomes <code>parameters('purpose')[1]</code>. This will fetch the second element of the array (or element at index value 1) which is Staging, and so on...</p>
<h3>3. Depending upon resources being deployed by the copy Loop</h3>
<p>Let's assume you want to deploy a storage account. But you want to deploy it only after all the web apps are deployed by the loop. In this scenario, the dependsOn property of a resource is set to the name of the &quot;copy&quot; property of the resource, rather than the resource itself.</p>
<pre><code>    {
        "apiVersion": "2015-06-15",
        "type": "Microsoft.Storage/storageAccounts",
        "name": "teststorage101",
        "location": "[resourceGroup().location]",
        "properties": {
            "accountType": "Standard_LRS"
        },
        "dependsOn": ["websitescopy"]
    }</code></pre>
<p>Note above that the dependsOn property is set to the name of the copy construct in the earlier web app example. This storage account will not be deployed until all 3 web apps have been deployed.</p>
<h3>4. Limitations</h3>
<p>There are two limitations on the use of the copy to iterate and create multiple resource instances:</p>
<ol>
<li><strong>Nested Resources</strong> - You cannot use a copy loop for a nested resource. If you need to create multiple instances of a resource that you typically define as nested within another resource, you must instead create the resource as a top-level resource and define the relationship with the parent resource through the <strong>type</strong> and <strong>name</strong> properties.</li>
<li><strong>Looping Properties of a Resource</strong> - You can only use copy on resource types, not on properties within a resource type. For example, you cannot use copy to create multiple data disks within a VM.</li>
</ol>
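<p>To illustrate the workaround for the first limitation, here is a hypothetical sketch (the SQL server parameter, database naming, and apiVersion are illustrative, not from the earlier example) in which a normally nested resource is declared at the top level, with the parent relationship expressed through its <strong>type</strong> and <strong>name</strong>:</p>
<pre><code>{
    "apiVersion": "2014-04-01",
    "type": "Microsoft.Sql/servers/databases",
    "name": "[concat(parameters('sqlServerName'), '/database', copyIndex())]",
    "location": "[resourceGroup().location]",
    "copy": {
        "name": "databasescopy",
        "count": 3
    },
    "dependsOn": [
        "[concat('Microsoft.Sql/servers/', parameters('sqlServerName'))]"
    ],
    "properties": {
        "edition": "Basic"
    }
}</code></pre>
<p>Because the type has two segments after the provider namespace (servers/databases), the name also has two segments, which is how Azure links each database to its parent server.</p>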
<p>That is all there is to iterating over and creating multiple resources from a single definition. As your templates become more complex, these constructs/helper functions will help you a lot. For example, if you need to deploy multiple load-balanced resources, you can use the concepts defined in this post.</p>
<p>You can also refer to the official documentation here: <a href="https://azure.microsoft.com/en-us/documentation/articles/resource-group-create-multiple/">copy, copyIndex, and length</a></p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-iterating-and-creating-multiple-instances-of-a-resource</link>
<pubDate>Tue, 27 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Deploying ARM Templates using Visual Studio</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>In the last blog we saw <a href="/post/step-by-step-arm-templates-authoring-arm-templates-using-visual-studio/">How to use Visual Studio to author ARM templates</a>. In this blog we will see how to use Visual Studio (VS) to deploy the template without leaving VS.</p>
<p>Deploying with Visual Studio is simple, straightforward, and intuitive. Just follow the steps below.</p>
<ol>

    <li>
Either go to the Solution Explorer -> Right Click on the project and select "Deploy -> New Deployment" as shown below:<br />
<img src="/images/147578624957f6b6090d5eb.png" alt="New Deployment - Solution Explorer" /><br />

Or you can go to the menu option Project -> Deploy -> New Deployment as shown below:<br />
<img src="/images/147578624157f6b601b647b.png" alt="New Deployment - Project Menu" />

<br />
Once you click on the "New Deployment", you will be presented with the below Dialog for the deployment.<br />
<img src="/images/147578742957f6baa5bfb1c.png" alt="New Deployment Dialog" /><br />

If you are not logged in then it will ask you to log into your Azure account. <br />

    </li>

    <li>
In the Dialog for "Deploy to Resource Group" select the Subscription by clicking on the first drop down.<br />
    </li>

    <li>
Next, click on the drop down for the Resource Group. You can either select an existing Resource Group or click on the "&lt;Create New...&gt;" option to create a new resource group for the current deployment.
<br />
<img src="/images/147578768757f6bba7dd5ec.png" alt="Resource Group creation" />
<br />
If you click on the "&lt;Create New...&gt;" option to create a new Resource Group, you will be presented with an additional popup.
<br />
<img src="/images/147578794457f6bca893e66.png" alt="Resource Group creation additional Popup" /><br />
In this additional popup, type the name for your new resource group and the location in Azure where it should be created. Click "Create" once done in the additional popup. <br /><br />
    </li>

    <li>
Next, we are going to provide the value for the parameters. Go ahead and click on the "Edit Parameters..." link in the "Deploy to Resource Group" dialog. This will open another popup to provide the parameters. <br />
Button to edit parameters is shown below:<br />
<img src="/images/147578820657f6bdae623f9.png" alt="Edit Parameters" /> <br />

Additional dialog to provide parameters is shown below: <br />
<img src="/images/147578821157f6bdb3ec989.png" alt="Providing Parameters" /> <br />

Note the following points in the parameters:
<ol type="a">
<li>For string parameters, a text box is provided.</li>
<li>For secure string parameters, such as passwords, a secure password text box is provided.</li>
<li>For parameters for which you have defined "Allowed Values" in your template, a combo box (or drop down) is provided, with the default value selected by default.</li>
</ol>
 Click OK once done.
   </li>

    <li>
Next, click on the Deploy button to deploy the template to Azure.
    </li>

    <li>
You can check the results in the <b>Outputs</b> window in Visual Studio. Along with time stamps, it shows you the steps Visual Studio took to perform the deployment. It uses the parameter values you provided and runs a PowerShell script to deploy the resources. You will notice a PowerShell window opening and prompting for the Admin Password. 
<br />
<b>Note 1: </b>The PowerShell window may not come to the foreground as the active window. Just find and click on the window in your Taskbar. <br />
Provide the password and hit Enter as shown below:
<img src="/images/147578930357f6c1f7c0b65.png" alt="PowerShell window" /> <br />
<br />
<b>Note 2: </b>It may take some time to complete the deployment after that. Wait, and do not close the PowerShell window. It should close automatically once done.
<br />
<b>Note 3: </b>Once the deployment completes, the last line in the Output window in Visual Studio will be: "Successfully deployed template..." as shown below:
<br />
<img src="/images/147578931057f6c1fecaa6a.png" alt="Success - Output Window" /> <br />

    </li>

</ol>
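<p>Under the hood, the deployment Visual Studio performs is roughly equivalent to running the project's PowerShell deployment script yourself. A simplified sketch of that call (the deployment name, resource group name, and file paths below are illustrative) would be:</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment `
    -ResourceGroupName TestResourceGroup01 `
    -TemplateFile .\Templates\WindowsVirtualMachine.json `
    -TemplateParameterFile .\Templates\WindowsVirtualMachine.parameters.json</code></pre>
<p>Knowing this equivalence is useful when you later want to move the same deployment into an automated release pipeline.</p>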
<p>This is it! Navigate to the Azure portal and validate the deployed resources in your selected resource group.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-deploying-arm-templates-using-visual-studio</link>
<pubDate>Tue, 20 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Authoring ARM Templates using Visual Studio</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p><strong>Visual Studio</strong> is a very powerful tool when it comes to authoring ARM Templates. </p>
<p><strong>Key features</strong> which make it a tool of our choice are:</p>
<ul>
<li>In-house support for ARM Templates</li>
<li>Smart IntelliSense</li>
<li>Pre-populated templates for various Azure resources</li>
<li>JSON Outlining</li>
<li>Easy Deployment options</li>
</ul>
<p>The screenshots in this blog post are from Visual Studio 2013. You can use other newer versions as well.</p>
<h3>Pre-Requisites</h3>
<p>You need to have the Azure SDK installed to get the true power of Visual Studio with Azure integration. If you don't have it already, you can install it from here: <a href="https://azure.microsoft.com/en-us/downloads/">Azure SDK Downloads</a></p>
<h3>Authoring First ARM Template in Visual Studio</h3>
<p>Authoring with Visual Studio is very easy. </p>
<ol type="1">
<li>To get started, just launch Visual Studio from the Start menu.</li>
<li>Next, create a new Project of type "Azure Resource Group" by navigating to Templates -> Visual C# -> Cloud <br /><br />

<img src="/images/147576108957f653c1768fc.png" alt="New Project" />
<br />

</li>

<li>
    Next, you will be presented with a dialog to "Select Azure Template". If you want to author from scratch, choose a Blank Template. Else, select one of the starter templates. For this blog, we will be using the "Windows Virtual Machine" template.<br /><br />
    <img src="/images/147576828457f66fdc0f0e6.png" alt="Selecting Azure Template" />
    <br />
</li>

<li>
    The project is created with various folders and files. You can explore it in the Solution Explorer in Visual Studio.<br /><br />
    <img src="/images/147576986357f6760758619.png" alt="Solution Explorer" />
    <br />

Let us see what these folders and files are:
        <ol type="a">
            <li><b>Scripts</b>: The single PS1 file creates a new Resource Group and deploys the ARM Template. It uses the "New-AzureRmResourceGroupDeployment" PowerShell cmdlet to deploy the template. </li>
            <li><b>Templates</b>: "<i>WindowsVirtualMachine.json</i>" is the main ARM Template file that we are interested in. Also, "<i>WindowsVirtualMachine.parameters.json</i>" is the parameters file for the ARM template.</li>
            <li><b>Tools</b>: This folder contains the "AzCopy.exe" file to help you copy any artifacts to Azure.</li>
        </ol>
</li>

<li>
    Double click the "<i>WindowsVirtualMachine.json</i>" file to open it. You will be presented with a huge JSON file. Collapse the sections by clicking the small "-" signs to the left of the file. Also notice the <b>JSON Outline</b> panel on the left. This is your biggest friend in Visual Studio when authoring ARM Templates.<br /><br />
    <img src="/images/147577487757f6899d297b5.png" alt="" />
    <br />
You can immediately notice that the key sections, both in the template in the middle and in the JSON Outline panel on the left (in the image above), are:
               <ol type="a">
                   <li>parameters</li>
                   <li>variables</li>
                   <li>resources</li>
               </ol>
You can click on any of the elements in the left JSON Outline panel and the same section will be highlighted in the center, in the JSON template file.
    <br />

</li>

<li>
    Next, let us look at JSON Outline panel and check how it can provide us more information and help us in authoring templates.<br /><br />
    <img src="/images/147578115457f6a2226c2be.png" alt="JSON Outline Panel" />
    <br />
    You can see that the panel provides a special icon for each type of resource. In our current template, the resources listed are:
    <ol type="a">
        <li>StorageAccount</li>
        <li>PublicIPAddress</li>
        <li>VirtualNetwork</li>
        <li>NetworkInterface</li>
        <li>VirtualMachine</li>
    </ol>

Click on each of the resources and inspect how their JSON structures look and differ. You will immediately notice that the major difference between these resources is in their <b>Type</b> and <b>Properties</b>.

<h3>Adding New Resource</h3> 
Let's assume you want to add a new resource to this template. You have two ways to achieve this:
    <ol>
    <li><b>Method 1</b> - Create a new resource by modifying and adding the JSON for the new resource in the template.</li>
    <li><b>Method 2</b> - Let Visual Studio add the resource for you. Right click anywhere in the resources area of the JSON Outline panel, or click the small "+" box at the top left of the panel (as shown in the image above), and VS will give you a popup to add the resource from pre-defined resources as shown below.</li>
    </ol>
    <img src="/images/147578170157f6a4457bae0.png" alt="Add Resource" />
<br />
Once you click Add, the JSON for the resource will be added to the template and the corresponding new element will appear in the JSON Outline panel.
<br />

<h3>Deleting a Resource</h3> <br />
If you need to delete a resource, simply right click on that resource in the JSON outline panel on the left and then select "Delete Resource".
<br />

<h3>Using IntelliSense</h3>     
The last thing to notice is the use of <b>IntelliSense</b> in Visual Studio, which helps you as you edit the templates.<br /><br />
    <img src="/images/147578465457f6afce4debb.png" alt="Intellisense" />
    <br />
     When you type an opening quote, the closing quote is automatically provided. Also, as you can see in the above image, the valid values that can appear at that position are shown, along with a small tooltip about the data type. If IntelliSense doesn't come up automatically, press Ctrl + Space to invoke it.
</li>

</ol>
<p>In the end, Visual Studio makes authoring ARM templates much easier and more manageable for you.</p>
<p>In the next blog, we will see how to use Visual Studio to Deploy the templates.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-authoring-arm-templates-using-visual-studio</link>
<pubDate>Sat, 17 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Creating Parameters file for an ARM Template</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>You can pass the input values for the Parameters in your ARM template using an additional JSON file. This additional file is what we will be referring to as <strong>Parameters File</strong>.</p>
<p>The only restriction on a parameters file is that its size cannot exceed 64 KB.</p>
<p>A parameters file follows a structure similar to that of an ARM template, but it is much simpler. In all, it has 3 sections, as explained below:</p>
<ol>
<li><strong>$schema</strong> - Required Object - Location of the JSON schema file that describes the version of the template language.</li>
<li><strong>contentVersion</strong> - Required Object - Version of the template (such as 1.2.0.20). When deploying resources using the template, this value can be used to make sure that the right template is being used.</li>
<li><strong>parameters</strong> - Required Object - This is a JSON object which contains various objects as its members. Each object within the &quot;parameters&quot; object represents a value for a parameter corresponding to your ARM template.</li>
</ol>
<p>Let's check how the parameters file looks for the ARM template we built earlier for deploying a Storage Account and a Virtual Network.</p>
<pre><code>{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vhdStorageName": {
            "value": "harvestingstorage101"
        },
        "virtualNetworkName": {
            "value": "testvNet101"
        }
    }
}</code></pre>
<p>Note that only 2 parameter values are provided. These correspond to the parameters in the ARM template. </p>
<p><strong>Note:</strong> The parameter names must match the parameters defined in the ARM template.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-creating-parameters-file-for-an-arm-template</link>
<pubDate>Wed, 14 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Deploying Template Using Azure PowerShell</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>In the previous blog you learned <a href="/post/step-by-step-arm-templates-building-your-first-arm-template/">How to build your first ARM Template</a> . Now that you have a fully functional ARM template we want to deploy this template to Azure.</p>
<p>There are various options to deploy a template to Azure. We already saw in the last blog <a href="/post/step-by-step-arm-templates-deploying-template-using-azure-portal/">How to deploy a template using <strong>Azure Portal</strong></a>. Now we will look at <strong>Azure PowerShell</strong> as a more programmatic and automated way to deploy the template.</p>
<h2>Pre-requisites</h2>
<p>Things you should know before deployment</p>
<ol>
<li><strong>Azure PowerShell</strong> - This should be installed on the machine from where the Steps will be followed. If you don't have this then use this link to get it: <a href="https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/">Get Azure PowerShell</a></li>
<li><strong>Azure Subscription</strong> - where you want to deploy your template</li>
<li><strong>Resource Group</strong> - This is the resource group in Azure where you will be deploying your template. You can create a new resource group (for the resources that will be deployed by the template) or use an existing one.</li>
<li><strong>Parameters</strong> - The values of the input parameters to the template should be known to you before the deployment. Follow your naming conventions when defining the parameters for deployments of resources in Azure. </li>
<li><strong>Internet Connectivity</strong> - This should be present on the machine from where the Steps will be followed for connectivity to Azure</li>
</ol>
<h2>Steps for Deployment</h2>
<ul>
<li>First, launch a PowerShell window as an Administrator</li>
<li>Then, log into the Azure account. </li>
</ul>
<p>Run the below cmdlet to log into Azure:</p>
<pre><code>Add-AzureRmAccount</code></pre>
<ul>
<li>Select appropriate Azure Subscription</li>
</ul>
<p>You have two choices here. You can either use below cmdlet to use Subscription ID</p>
<pre><code>Set-AzureRmContext -SubscriptionID &lt;YourSubscriptionId&gt;</code></pre>
<p>Or you can use the Subscription name with the below cmdlet:</p>
<pre><code>Select-AzureRmSubscription -SubscriptionName "&lt;Your Subscription Name&gt;"</code></pre>
<ul>
<li>Next, if you already have a resource group to which you want to deploy the template, skip this step. Otherwise, create a new resource group. A resource in the Azure ARM architecture can only exist within a resource group. </li>
</ul>
<p>Use below cmdlet to create a new Resource Group:</p>
<pre><code>New-AzureRmResourceGroup -Name TestResourceGroup01 -Location "Central US"</code></pre>
<ul>
<li>Before deploying the template to Azure, you should test it. This step is optional but highly recommended.</li>
</ul>
<p>Use the below cmdlet to test and validate your template:</p>
<pre><code>Test-AzureRmResourceGroupDeployment -ResourceGroupName TestResourceGroup01 -TemplateFile &lt;PathToJsonTemplate&gt;</code></pre>
<ul>
<li>Now comes the last step, i.e. deploying the template. You have two options: either deploy a template without any parameters (if none are required), or specify the parameters. Let's check both of these options next.</li>
</ul>
<h3>Deploying Template which doesn't need Parameters</h3>
<p>You can deploy such a template using the <code>New-AzureRmResourceGroupDeployment</code> cmdlet.
If the template file is on a local directory then use the below cmdlet:</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateFile &lt;PathToTemplate&gt;</code></pre>
<p>If the template file is uploaded to some hosted location and is accessible via a link, then use the below cmdlet to deploy the template:</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateUri &lt;LinkToTemplate&gt;</code></pre>
<h3>Deploying Template with Parameters</h3>
<p>Deploying the template is exactly the same as in the previous section. You use the same cmdlet. To specify the parameters, you have 4 options. Use the below cmdlets for the option you want to use.</p>
<p><strong>Option 1</strong> - Using Inline Parameter</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateFile &lt;PathToTemplate&gt; -myParameterName "parameterValue" -secondParameterName "secondParameterValue"</code></pre>
<p><strong>Option 2</strong> - Using Parameter Object</p>
<pre><code>$parameters = @{"&lt;ParameterName&gt;"="&lt;Parameter Value&gt;"}
New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateFile &lt;PathToTemplate&gt; -TemplateParameterObject $parameters</code></pre>
<p><strong>Option 3</strong> - Using Parameter file which is in local environment</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateFile &lt;PathToTemplate&gt; -TemplateParameterFile &lt;PathToParameterFile&gt;</code></pre>
<p><strong>Option 4</strong> - Using Parameter file which is located externally and can be referenced via Link</p>
<pre><code>New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName TestResourceGroup01 -TemplateUri &lt;LinkToTemplate&gt; -TemplateParameterUri &lt;LinkToParameterFile&gt;</code></pre>
<h3>Key Gotchas</h3>
<ol>
<li>If you provide values for a parameter in both the local parameter file and inline, the inline value takes precedence.</li>
<li>You cannot use inline parameters with an external parameter file. All inline parameters are ignored when you specify &quot;TemplateParameterUri&quot; parameter.</li>
<li>As a best practice, do not store sensitive information in the parameters file, e.g. the local admin password. Instead, either provide these dynamically using inline parameters, or store them in Azure Key Vault and then reference the key vault in your parameters file.</li>
</ol>
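<p>For the Key Vault approach mentioned in the third gotcha, the parameters file uses a &quot;reference&quot; object instead of a literal &quot;value&quot;. A sketch of one such parameter (the subscription ID, resource group, vault, and secret names below are placeholders) looks like this:</p>
<pre><code>"adminPassword": {
    "reference": {
        "keyVault": {
            "id": "/subscriptions/&lt;SubscriptionId&gt;/resourceGroups/&lt;ResourceGroupName&gt;/providers/Microsoft.KeyVault/vaults/&lt;VaultName&gt;"
        },
        "secretName": "adminPasswordSecret"
    }
}</code></pre>
<p>Azure Resource Manager retrieves the secret at deployment time, so the password never appears in the parameters file itself.</p>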
<p>You can find more details about these cmdlets here: <a href="https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-deploy/#deploy-with-powershell">Deploy resources with Resource Manager templates and Azure PowerShell</a></p>
<p>In the next blog, we will see how to create a Parameters File for providing parameters dynamically to the template.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-deploying-template-using-azure-powershell</link>
<pubDate>Sun, 11 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Deploying Template Using Azure Portal</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>In the last blog you learned <a href="/post/step-by-step-arm-templates-building-your-first-arm-template/">How to build your first ARM Template</a> . Now that you have a fully functional ARM template we want to deploy this template to Azure.</p>
<p>There are various options to deploy a template to Azure. Using Azure portal is by far the easiest and most intuitive option for the deployment. Follow the steps in this blog to deploy your template to Azure.</p>
<h2>Pre-requisites</h2>
<p>Things you should know before deployment</p>
<ol>
<li><strong>Azure Subscription</strong> - where you want to deploy your template</li>
<li><strong>Resource Group</strong> - This is the resource group in Azure where you will be deploying your template. You can create a new resource group (for the resources that will be deployed by the template) or use an existing one.</li>
<li><strong>Parameters</strong> - The values of the input parameters to the template should be known to you before the deployment. Follow your naming conventions when defining the parameters for deployments of resources in Azure.</li>
</ol>
<h2>Steps for Deployment</h2>
<ol>
<li>First, log into the Azure Portal.</li>
<li>
<p>Next, go to &quot;New&quot; and type &quot;Template deployment&quot; in the search box and hit enter.</p>
<p><img src="/images/147559616157f3cf81381da.png" alt="New Deployment" /></p>
</li>
<li>
<p>Next, click on the <strong>Template Deployment</strong> and then click on &quot;Create&quot;</p>
<p><img src="/images/147559654057f3d0fc1bf5c.png" alt="Create Deployment" /></p>
</li>
<li>
<p>Now click on &quot;<strong>Template (Edit Template)</strong>&quot;. It will open a panel to paste your template. Delete whatever is auto-populated in the template area. Copy your whole JSON template and paste it here. Note that the left section in the new panel will update to show what parameters, variables, and resources you have in the template. Click on &quot;Save&quot; once done.</p>
<p><img src="/images/147559697957f3d2b32ff67.png" alt="Editing Template" /></p>
</li>
<li>
<p>Next, click on &quot;<strong>Parameters (Edit Parameters)</strong>&quot; on the left side. The parameters will be automatically picked up from the template. The parameters for which a default value is provided will be pre-populated; the rest you will have to provide. Click OK once done.</p>
<p><img src="/images/147559719657f3d38c652e3.png" alt="Providing Parameters" /></p>
</li>
<li>
<p>Next, you have the option to select the <strong>Resource Group</strong>. You can either create a new resource group (for all the resources that will be deployed via the template) or you can use an existing resource group.</p>
<p><img src="/images/147559727457f3d3da45879.png" alt="Resource Group Selection" /></p>
</li>
<li>
<p>The last option is to click on the &quot;<strong>Legal Terms</strong>&quot; and read through the terms. If you agree then click on the &quot;Purchase&quot; button. </p>
<p><img src="/images/147559762057f3d534eae90.png" alt="Legal Terms" /></p>
</li>
<li>Finally, click on <strong>Create</strong> to submit the deployment. </li>
</ol>
<p>You can monitor the job performing the deployment and its progress. After some time the deployment will finish successfully and you can view the resources in the resource group you selected.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-deploying-template-using-azure-portal</link>
<pubDate>Thu, 08 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Building your first ARM Template</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>In this blog post, we will use the knowledge learned in previous blogs and build a basic ARM template.
If you haven't checked previous blog posts then have a quick read of your preferred topics here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>To follow this blog, you can use any text editor which can provide JSON syntax highlighting. We will be looking at using Visual Studio to author ARM templates in a future blog post. Visual Studio can provide JSON outlining and is a very powerful tool for authoring ARM templates.</p>
<p>Let us assume that you want to deploy a storage account and build a virtual network in Azure. You want to automate the process and need to repeat the process in various environments. ARM templates fit the bill for the solution of this problem.</p>
<p>In the next few sections, we will build each section of the template and then at the end will check the complete template.</p>
<h3>1. Template Header</h3>
<p>This section is very basic and contains just the schema and the content version. You can use the content version to manage the development versions of the template as you make changes to your templates in the future.</p>
<pre><code>"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",</code></pre>
<h3>2. Parameters</h3>
<p>Here we define all the inputs we need from the end users. We provide default values for those parameters for which we know what the most common values will be based on our environment. For the current template we define two parameters:</p>
<ul>
<li><strong>vhdStorageName</strong> - This is the name of the storage account in Azure which will be created by the deployment of this template.</li>
<li><strong>virtualNetworkName</strong> - This is the name of the Virtual Network which will be created by the deployment of this template.</li>
</ul>
<p>As a best practice, provide metadata describing what each parameter is for. Also, note that we have used camel casing to name the parameters, with very descriptive names.</p>
<pre><code>"parameters": {
    "vhdStorageName": {
        "type": "string",
        "minLength": 1,
        "defaultValue": "mystorage101",
        "metadata": {
            "description": "Name of the Storage Account."
        }
    },
    "virtualNetworkName": {
        "type": "string",
        "metadata": {
            "description": "Name of the virtual network."
        }
    }
},</code></pre>
<h3>3. Variables</h3>
<p>Next, we add some variables for the values which will be reused later in the template in the resources section. We create variables for all those reusable values for which we know what their value at deployment will be. We define 4 variables in this template:</p>
<ul>
<li><strong>addressPrefix</strong> - Address prefix for the Virtual Network </li>
<li><strong>subnetName</strong> - Subnet name which will be created under the virtual network</li>
<li><strong>subnetPrefix</strong> - Subnet prefix for the subnet, which will be created under the virtual network</li>
<li><strong>vhdStorageType</strong> - Type of the storage account. Here we used Standard locally redundant storage (LRS)</li>
</ul>
<p>The variables section looks as below:</p>
<pre><code>"variables": {
    "addressPrefix": "10.0.0.0/16",
    "subnetName": "Subnet",
    "subnetPrefix": "10.0.0.0/24",
    "vhdStorageType": "Standard_LRS"
},</code></pre>
<h3>4. Resources</h3>
<p>Now comes the last and main section i.e. Resources. Here we define both the resources for our template:</p>
<ul>
<li>Storage Account</li>
<li>Virtual Network</li>
</ul>
<p>Let us look at each of these resources one by one.</p>
<p><strong>A. Storage Account Resource</strong></p>
<p>This resource has below properties (or key-value pairs):</p>
<ol>
<li><strong>Type</strong> - Type of the resource is set to Microsoft.Storage/storageAccounts. This is what tells Azure that the current resource is a Storage Account.</li>
<li><strong>Name</strong> - This defines the name of the storage account to be deployed based on the parameter to the template</li>
<li><strong>API Version</strong> - this is the standard version for the REST API in Azure</li>
<li><strong>Location</strong> - This is the Azure location. The location is found dynamically based on the location of the resource group to which this template will be deployed.</li>
<li><strong>tags</strong> - only one tag is defined for the display name. You should have more tags in case of a production ready template</li>
<li><strong>properties</strong> - This is where you tell Azure what kind of storage account you need. Here the account type is set using the value of the variable vhdStorageType.</li>
</ol>
<p><strong>B. Virtual Network Resource</strong></p>
<p>This resource has below properties (or key-value pairs):</p>
<ol>
<li><strong>Type</strong> - Type of the resource is set to Microsoft.Network/virtualNetworks. This is what tells Azure that the current resource is a Virtual Network.</li>
<li><strong>Name</strong> - This defines the name of the virtual network to be deployed based on the parameter to the template</li>
<li><strong>API Version</strong> - this is the standard version for the REST API in Azure</li>
<li><strong>Location</strong> - This is the Azure location. The location is found dynamically based on the location of the resource group to which this template will be deployed.</li>
<li><strong>tags</strong> - only one tag is defined for the display name. You should have more tags in case of a production ready template</li>
<li><strong>properties</strong> - This is where you define the address space for the virtual network. You also define the subnet under the virtual network here.</li>
</ol>
<p>The resources section looks like below:</p>
<pre><code>"resources": [
    {
        "type": "Microsoft.Storage/storageAccounts",
        "name": "[parameters('vhdStorageName')]",
        "apiVersion": "2015-06-15",
        "location": "[resourceGroup().location]",
        "tags": {
            "displayName": "StorageAccount"
        },
        "properties": {
            "accountType": "[variables('vhdStorageType')]"
        }
    },
    {
        "apiVersion": "2015-06-15",
        "type": "Microsoft.Network/virtualNetworks",
        "name": "[parameters('virtualNetworkName')]",
        "location": "[resourceGroup().location]",
        "tags": {
            "displayName": "VirtualNetwork"
        },
        "properties": {
            "addressSpace": {
                "addressPrefixes": [
                    "[variables('addressPrefix')]"
                ]
            },
            "subnets": [
                {
                    "name": "[variables('subnetName')]",
                    "properties": {
                        "addressPrefix": "[variables('subnetPrefix')]"
                    }
                }
            ]
        }
    }
]</code></pre>
<h3>Complete Template</h3>
<p>Here is the complete template, built from all the sections discussed above. You can copy and use this template for testing and for working along with the upcoming deployment posts.</p>
<pre><code>{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vhdStorageName": {
            "type": "string",
            "minLength": 1,
            "defaultValue": "mystorage101",
            "metadata": {
                "description": "Name of the Storage Account."
            }
        },
        "virtualNetworkName": {
            "type": "string",
            "metadata": {
                "description": "Name of the virtual network."
            }
        }
    },
    "variables": {
        "addressPrefix": "10.0.0.0/16",
        "subnetName": "Subnet",
        "subnetPrefix": "10.0.0.0/24",
        "vhdStorageType": "Standard_LRS"
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('vhdStorageName')]",
            "apiVersion": "2015-06-15",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "StorageAccount"
            },
            "properties": {
                "accountType": "[variables('vhdStorageType')]"
            }
        },
        {
            "apiVersion": "2015-06-15",
            "type": "Microsoft.Network/virtualNetworks",
            "name": "[parameters('virtualNetworkName')]",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "VirtualNetwork"
            },
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "[variables('addressPrefix')]"
                    ]
                },
                "subnets": [
                    {
                        "name": "[variables('subnetName')]",
                        "properties": {
                            "addressPrefix": "[variables('subnetPrefix')]"
                        }
                    }
                ]
            }
        }
    ]
}</code></pre>
<p>In the next blog, we will learn how to deploy this template.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-building-your-first-arm-template</link>
<pubDate>Sun, 04 Sep 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - Helper Functions</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>ARM templates have various dynamic constructs called <strong>Helper Functions</strong> which can make your templates more generic. These constructs reduce the hard-coded values in your templates. You can use the information from this blog to make your existing templates more dynamic and to write new templates with a more generic approach.</p>
<p>Let's look at the most important helper functions and their practical usage one by one. </p>
<h3>1. Resource Id - Resource Function</h3>
<p>You use this function to determine the ID of a resource. It is typically used when the resource (whose ID is needed) is not being deployed in the current template and already exists in Azure.</p>
<p>The generic syntax to use this is:</p>
<pre><code>resourceId ([subscriptionId], [resourceGroupName], resourceType, resourceName1, [resourceName2]...)</code></pre>
<p>The only required parameters of this helper function are resourceType and resourceName1.</p>
<p>These parameters are as follows:</p>
<ul>
<li>subscriptionId - This is only needed if you want to refer to a different subscription. The default is the current subscription</li>
<li>resourceGroupName - Name of the resource group where the resource exists. The default is the current resource group, to which you are deploying the template</li>
<li>resourceType - Type of the resource, including the resource provider namespace</li>
<li>resourceName1 - Name of the resource</li>
<li>resourceName2 - The next resource name segment, if the resource is nested. E.g. a VM Extension</li>
</ul>
<p><strong>Example</strong></p>
<pre><code>"vnetId1": "[resourceId('AE06-Mgmt-RG','Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]",
"vnetId2": "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"</code></pre>
<p>The above example shows two ways of using the resourceId helper function to determine the ID of a virtual network. The first uses the resource group, resource type and resource name. The second uses only the resource type and resource name, and assumes the resource group to be the same one the template is being deployed to.</p>
<h3>2. Resource Group - Resource Function</h3>
<p>This helper function returns an object that represents the current resource group to which the template is being deployed.</p>
<p>The generic syntax to use this is:</p>
<pre><code>resourceGroup()</code></pre>
<p>No parameters are needed in this helper function.</p>
<p><strong>Example</strong></p>
<pre><code>"vhdStorageName": "[concat('vhdstorage', uniqueString(resourceGroup().id))]",
 "storageAccountResourceGroup": "[resourcegroup().name]",
 "location": "[resourceGroup().location]"</code></pre>
<p>The above example shows 3 uses of the resourceGroup helper function. The first uses the ID of the resource group, the second uses its name property and the third uses the location of the current resource group.</p>
<h3>3. Subscription - Resource Function</h3>
<p>This helper function returns an object with the details of the subscription to which the template is being deployed.</p>
<p>The generic syntax to use this is:</p>
<pre><code>subscription()</code></pre>
<p>No parameters are needed in this helper function.</p>
<p><strong>Example</strong></p>
<pre><code>"subscriptionId": "[subscription().subscriptionId]"</code></pre>
<p>The above example is straightforward. It fetches the subscription Id of the current subscription.</p>
<h3>4. Concat - String Function</h3>
<p>This function is used to concatenate (i.e. combine) two or more values.</p>
<p>The generic syntax to use this is:</p>
<pre><code>concat (arg1, arg2, arg3, ...)</code></pre>
<p>At least one argument is needed for concat to work. The arguments can be strings or arrays, but not a mix of the two.</p>
<p><strong>Example</strong></p>
<pre><code>"subnetRef": "[concat(variables('vNetId'), '/subnets/', variables('subnetName'))]"</code></pre>
<p>The above example combines (or concatenates) 3 text values. The first value is the value of the variable vNetId. The second value is the string &quot;/subnets/&quot;. The third value is the value of the variable subnetName.</p>
<p>These are the most common Helper functions that you will use in 80%-90% of the templates. </p>
<p>To check the complete list of Helper Functions, check this official link: <a href="https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-functions/#resource-functions">Azure Resource Manager template functions</a></p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-helper-functions</link>
<pubDate>Wed, 31 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - What is in an ARM Template - Understanding Components 5 - Outputs</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>If you haven't checked the previous blog on the overall structure of the ARM template, I suggest you give it a quick read before checking the component described in this post in detail.</p>
<ol>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-all-components/">Understanding all components</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters/">Parameters</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables/">Variables</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources/">Resources</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs/">Outputs - This blog post</a></li>
</ol>
<h2>Outputs</h2>
<p>This section is used to output any values after the deployment of the ARM Template. This can output any Ids or connection strings based on the deployed resources. </p>
<p>This is a single JSON object with various output objects (just like Parameters). The overall JSON structure looks like below:</p>
<pre><code>"outputs": { 
    "output1" : {
                     "type":"string",
                     "value": "value1"
      },
    "output2" : {
                     "type":"string",
                     "value": "value2"
      }
}</code></pre>
<p>Each output object has 2 properties:</p>
<ol>
<li>Type - The data type of the output</li>
<li>Value - The value of the output</li>
</ol>
<p>A real-life example will look like below:</p>
<pre><code>"outputs": {
    "adminUsername": {
        "type": "string",
        "value": "[parameters('adminUsername')]"
    }
}</code></pre>
<p>The above example will output the administrator username using the parameter from the template.</p>
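<p>Outputs can also surface values computed with helper functions. As an illustrative sketch (the output name &quot;vnetId&quot; is hypothetical), the snippet below would return the resource ID of a virtual network deployed via a &quot;virtualNetworkName&quot; parameter like the one used earlier in this series:</p>
<pre><code>"outputs": {
    "vnetId": {
        "type": "string",
        "value": "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]"
    }
}</code></pre>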
<p>That's all there is to Outputs in ARM Templates. If you have any doubts, please leave a comment in the section below. Use the links at the top to learn about the other components of an ARM Template.</p>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs</link>
<pubDate>Tue, 30 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - What is in an ARM Template - Understanding Components 4 - Resources</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>If you haven't checked the previous blog on the overall structure of the ARM template, I suggest you give it a quick read before checking the component described in this post in detail.</p>
<ol>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-all-components/">Understanding all components</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters/">Parameters</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables/">Variables</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources/">Resources  - This blog post</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs/">Outputs</a></li>
</ol>
<h2>Resources</h2>
<p>This is the major section of the whole ARM template. This is where you define what resources should be deployed in Azure. You also define dependencies between resources in this section. </p>
<p>The resources section consists of an array of JSON objects, as shown below:</p>
<pre><code>"resources": [
        { },
        { }
]</code></pre>
<p>Each object in the array (represented via curly braces) is an Azure resource. You can deploy multiple resources in a single ARM template. E.g. You can deploy a new Storage Account, new Virtual Network and three Virtual Machines in that virtual network within a single template.
Within the object, various properties (and nested properties) are used to provide the configurations of each resource. </p>
<h3>Elements</h3>
<p>Different elements in a single resource object can be one of the following:</p>
<ol>
<li><strong>apiVersion</strong> - <strong><em>Required</em></strong> - Version of the API. e.g. &quot;2015-06-15&quot;</li>
<li><strong>type</strong> - <strong><em>Required</em></strong> - Type of the resource. This value is a combination of the namespace of the resource provider and the resource type that the resource provider supports. e.g. Azure Storage Account will have type as &quot;Microsoft.Storage/storageAccounts&quot;.</li>
<li><strong>name</strong> - <strong><em>Required</em></strong> - Name of the resource. The name must follow URI component restrictions, as well as any Azure naming restrictions. E.g. a storage account name can only contain lowercase letters and numbers, and has to be globally unique.</li>
<li><strong>location</strong> - Optional - One of the supported geo-locations for the resource type, without any spaces. Alternatively, use the resource group's location dynamically.</li>
<li><strong>tags</strong> - Optional - Tags that are associated with the resource.</li>
<li><strong>dependsOn</strong> - Optional - Other resources in the same template, that the current resource being defined depends on. The dependencies between resources are evaluated and resources are deployed in their dependent order. When resources are not dependent on each other, they are attempted to be deployed in parallel. The value can be a comma-separated list of resource names or resource unique identifiers.</li>
<li><strong>properties</strong> - Optional - Resource-specific configuration settings. E.g. the account type property for a storage account.</li>
<li><strong>resources</strong> - Optional - Child resources that depend on the resource being defined. E.g. Extension resources for a Virtual Machine resource.</li>
</ol>
<h3>Examples</h3>
<p>Let's look at two examples. First, we will take a simple resource example to deploy a storage account in Azure:</p>
<pre><code>{
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[variables('vhdStorageName')]",
            "apiVersion": "2015-06-15",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "StorageAccount",
                "department" : "Finance",
                "application" : "database"
            },
            "properties": {
                "accountType": "[variables('vhdStorageType')]"
            }
        }</code></pre>
<p>The above example will deploy a storage account with the name from the &quot;vhdStorageName&quot; variable. It will apply 3 tags to the resource after deployment. It will use the account type (i.e. standard or premium) based on the value of the &quot;vhdStorageType&quot; variable. If you want to deploy 2 or more similar storage accounts, just copy and paste the JSON for the resource, separated by a comma. It will become another object in the resources array.</p>
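<p>As a sketch of that copy-and-paste approach (the numeric name suffixes are hypothetical, added because each storage account name must be unique), two similar storage accounts in the resources array would look like:</p>
<pre><code>"resources": [
    {
        "type": "Microsoft.Storage/storageAccounts",
        "name": "[concat(variables('vhdStorageName'), '1')]",
        "apiVersion": "2015-06-15",
        "location": "[resourceGroup().location]",
        "properties": {
            "accountType": "[variables('vhdStorageType')]"
        }
    },
    {
        "type": "Microsoft.Storage/storageAccounts",
        "name": "[concat(variables('vhdStorageName'), '2')]",
        "apiVersion": "2015-06-15",
        "location": "[resourceGroup().location]",
        "properties": {
            "accountType": "[variables('vhdStorageType')]"
        }
    }
]</code></pre>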
<p>Now let's look at a complex and larger example of deploying a single virtual machine with one extension for Diagnostics.</p>
<pre><code>    {
        "apiVersion": "2015-06-15",
        "type": "Microsoft.Compute/virtualMachines",
        "name": "[variables('vmName')]",
        "location": "[resourceGroup().location]",
        "tags": {
            "displayName": "VirtualMachine"
        },
        "dependsOn": [
            "[concat('Microsoft.Storage/storageAccounts/', variables('vhdStorageName'))]",
            "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
        ],
        "properties": {
            "hardwareProfile": {
                "vmSize": "[variables('vmSize')]"
            },
            "osProfile": {
                "computerName": "[variables('vmName')]",
                "adminUsername": "[parameters('adminUsername')]",
                "adminPassword": "[parameters('adminPassword')]"
            },
            "storageProfile": {
                "imageReference": {
                    "publisher": "[variables('imagePublisher')]",
                    "offer": "[variables('imageOffer')]",
                    "sku": "[parameters('windowsOSVersion')]",
                    "version": "latest"
                },
                "osDisk": {
                    "name": "osdisk",
                    "vhd": {
                        "uri": "[concat('http://', variables('vhdStorageName'), '.blob.core.windows.net/', variables('vhdStorageContainerName'), '/', variables('OSDiskName'), '.vhd')]"
                    },
                    "caching": "ReadWrite",
                    "createOption": "FromImage"
                }
            },
            "networkProfile": {
                "networkInterfaces": [
                    {
                        "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
                    }
                ]
            },
            "diagnosticsProfile": {
                "bootDiagnostics": {
                    "enabled": true,
                    "storageUri": "[concat('http://', variables('diagnosticsStorageAccountName'), '.blob.core.windows.net')]"
                }
            }
        },
        "resources": [
            {
                "type": "extensions",
                "name": "Microsoft.Insights.VMDiagnosticsSettings",
                "apiVersion": "2015-06-15",
                "location": "[resourceGroup().location]",
                "tags": {
                    "displayName": "AzureDiagnostics"
                },
                "dependsOn": [
                    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
                ],
                "properties": {
                    "publisher": "Microsoft.Azure.Diagnostics",
                    "type": "IaaSDiagnostics",
                    "typeHandlerVersion": "1.5",
                    "autoUpgradeMinorVersion": true,
                    "settings": {
                        "xmlCfg": "[base64(concat(variables('wadcfgxstart'), variables('wadmetricsresourceid'), variables('wadcfgxend')))]",
                        "storageAccount": "[variables('diagnosticsStorageAccountName')]"
                    },
                    "protectedSettings": {
                        "storageAccountName": "[variables('diagnosticsStorageAccountName')]",
                        "storageAccountKey": "[listkeys(variables('accountid'), '2015-06-15').key1]",
                        "storageAccountEndPoint": "https://core.windows.net"
                    }
                }
            }
        ]
    }</code></pre>
<p>Note that the above code snippet defines a single virtual machine. Let us decode various sections of this complex resource:</p>
<ul>
<li>It begins with simple properties like apiVersion, type, name, location and tags, as discussed in the previous example. These are straightforward, so values are provided to them directly.</li>
<li>Next is the <strong>dependsOn</strong> section. This defines the dependencies between resources. In the above example, the virtual machine resource depends on a storage account and a network interface, which are also defined in the template. These 2 resources will be created before the virtual machine is created/deployed. If these resources are not created in the template, Azure will check for their presence in the current subscription. If they are not present, the template deployment will error out.</li>
<li>Next are various <strong>properties</strong> to configure the Virtual machine, like hardware profile, os profile, storage profile, os disk, network profile, diagnostics profile etc.</li>
<li>Next, we have additional <strong>sub-resources</strong>. These are Azure resources which will be created and linked to the current resource. Only one sub-resource is created in the above example, which is an extension for VM diagnostics settings.</li>
</ul>
<p>That's all there is to Resources in ARM Templates. If you have any doubts, please leave a comment in the section below. Use the links at the top to learn about the other components of an ARM Template.</p>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources</link>
<pubDate>Mon, 29 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - What is in an ARM Template - Understanding Components 3 - Variables</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>If you haven't checked the previous blog on the overall structure of the ARM template, I suggest you give it a quick read before checking the component described in this post in detail.</p>
<ol>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-all-components/">Understanding all components</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters/">Parameters</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables/">Variables - This blog post</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources/">Resources</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs/">Outputs</a></li>
</ol>
<h2>Variables</h2>
<p>Variables are values that you either know beforehand or can construct from the input parameters. These variables can then be reused at multiple locations in the resources section. If you later change the value of a variable, it is automatically updated at all the locations where it is used. Variables can also be used to define resource properties.</p>
<h3>Defining Variables</h3>
<p>The variables section is one big JSON object. Each property is a variable, whose value can be of a simple data type (like integer, bool, string etc.) or can be another complex JSON object. The general structure is as shown below:</p>
<pre><code>"variables": {
      "variable 1" : "value 1",
      "variable 2" : "value 2",
      "variable 3" : 1024,
      "variable 4" : {}
}</code></pre>
<p>Note that in the above example, the first 3 variables are of simple value types. The fourth variable, however, is a complex JSON object.</p>
<p>Let's now check a real variables section from an actual ARM template:</p>
<pre><code>"variables": {
        "vmSize": "Standard_A2",
        "virtualNetworkName": "MyVNETName",
        "vnetId1": "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
        "vnetId2": "[resourceId(parameters('vNetRG'),'Microsoft.Network/virtualNetworks',parameters('virtualNetworkName'))]",
        "subnetRef": "[concat(variables('vnetId1'), '/subnets/', variables('subnetName'))]",
        "vhdStorageName": "[concat('vhdstorage', uniqueString(resourceGroup().id))]",
        "storageAccountResourceGroup": "[resourcegroup().name]",
        "location": "[resourceGroup().location]",
        "subscriptionId": "[subscription().subscriptionId]"
    }</code></pre>
<p>There are lots of key constructs in the above code snippet. I have tried to capture as many different constructs in this snippet as I could. Let us decode each variable one by one.</p>
<ol>
<li>vmSize - A simple string value</li>
<li>virtualNetworkName - A simple string value</li>
<li>vnetId1 - This uses a special function named &quot;<strong>resourceId</strong>&quot; to find the resource ID of the virtual network. The function is invoked using the syntax <code>"[resourceId(input)]"</code> and returns the resource ID of the resource identified by its input. Also, note the use of another variable as an input to it.</li>
<li>vnetId2 - This also fetches the resource ID of a virtual network using the &quot;resourceId&quot; function. Note the use of a parameter value (parameter &quot;vNetRG&quot;) to specify the resource group of the existing virtual network.</li>
<li>subnetRef - This variable uses another function, &quot;<strong>concat</strong>&quot;, i.e. <code>"[concat(input1,input2,...)]"</code>. This function can take many inputs and will concatenate (i.e. join together) the values of all the inputs provided. You can use parameters or other variables as inputs.</li>
<li>vhdStorageName - This also uses the concat function, to dynamically generate a storage account name. In addition, it uses the &quot;<strong>resourceGroup</strong>&quot; function as <code>"[resourceGroup()]"</code>. This function always returns the resource group to which you are deploying the current ARM template. The variable then uses the id property of the returned resource group.</li>
<li>storageAccountResourceGroup - This uses the &quot;name&quot; property of the current resource group</li>
<li>location - This uses the &quot;location&quot; property of the current resource group.</li>
<li>subscriptionId - This uses the &quot;<strong>subscription</strong>&quot; function as <code>"[subscription()]"</code> to find the current subscription to which the ARM template is being deployed. It then uses the subscriptionId property of the returned subscription object to get the required ID.</li>
</ol>
<p>Note that these constructs are very powerful and can be used to dynamically construct your ARM template. These constructs are also known as Helper Functions and are explained in detail here: <a href="../step-by-step-arm-templates-helper-functions/">Step by Step ARM Templates - Helper Functions</a></p>
<h3>Using Variables</h3>
<p>Using variables is very easy and is similar to using parameters. In fact, you already saw the usage of variables above, while defining other variables.</p>
<p>You use square brackets to tell the ARM engine to evaluate whatever is inside them. You use the &quot;variables&quot; function and pass the name of the variable as input. Check the example below.</p>
<pre><code>"storageAccountName": "[variables('storageAccountName')]"</code></pre>
<h3>Best Practices</h3>
<p>Best practices are similar to the Parameters.</p>
<ul>
<li>Provide complete descriptive names, no matter how long.</li>
<li>Use <strong>Camel Casing</strong> to name your variables. i.e. The first letter should be lowercase, and every subsequent word starts with a capital letter, with no spaces between words. E.g. storageAccountName</li>
<li>Use the constructs explained in the previous section to dynamically generate variables. This reduces any human errors.</li>
<li>Anything that is used more than once and is not required to be entered by an end user, should be created as a variable. Later on, this helps by minimizing the number of places you need to change the value.</li>
</ul>
<p>That's all there is to Variables in ARM Templates. If you have any doubts, please leave a comment in the section below. Use the links at the top to learn about the other components of an ARM Template.</p>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables</link>
<pubDate>Fri, 26 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - What is in an ARM Template - Understanding Components 2 - Parameters</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>If you haven't checked the previous blog on the overall structure of the ARM template, I suggest you give it a quick read before checking the component described in this post in detail.</p>
<ol>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-all-components/">Understanding all components</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters/">Parameters - This blog post</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables/">Variables</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources/">Resources</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs/">Outputs</a></li>
</ol>
<h2>Parameters</h2>
<p>As mentioned earlier, parameters are the way to customize a template each time you deploy it to create resources in Azure. These parameters are the end-user inputs for various aspects of the template. E.g. If you are deploying an Azure Virtual Machine via an ARM Template, then the name of the VM can be an input parameter. The Operating System type can be another parameter.</p>
<p>The parameters can be referenced and used in other parts of the ARM Template.</p>
<h3>1. Defining Parameters</h3>
<p>The parameters section is one big JSON object with multiple JSON properties. Each property is one parameter, which is itself represented as another JSON object. Let us look at its structure at a high level.</p>
<pre><code>"parameters": {
               "parameter 1" : {},
               "parameter 2" : {},
               "parameter 3" : {}
}</code></pre>
<p>E.g. If you are creating a template to deploy a Windows Virtual Machine then the parameters will look something like below:</p>
<pre><code>"parameters": {
               "VMName" : {},
               "AdminUserName" : {},
               "AdminPassword" : {},
               "WindowsOSVersion" : {}
}</code></pre>
<p>Now let us look at one of the parameters. E.g. The AdminUserName parameter will look like:</p>
<pre><code>"adminUsername": {
            "type": "string",
            "minLength": 1,
            "metadata": {
                "description": "Username for the Virtual Machine."
            }
        }</code></pre>
<p>The parameter object, as shown above, has following parts:</p>
<ol>
<li><strong>Type</strong> - This is the data Type of the parameter.</li>
<li><strong>minLength</strong> - This is the minimum length the parameter must have</li>
<li><strong>Metadata</strong> - This is just to provide a description as to what the parameter means.</li>
</ol>
<p>The <strong>Data Types</strong> allowed for a parameter are:</p>
<ul>
<li>string or secureString – any valid JSON string</li>
<li>int – any valid JSON integer</li>
<li>bool – any valid JSON boolean </li>
<li>object – any valid JSON object </li>
<li>array – any valid JSON array</li>
</ul>
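<p>The secureString type is meant for sensitive inputs such as passwords, since the value is not logged or kept in the deployment history. A minimal sketch for the AdminPassword parameter mentioned earlier could look like:</p>
<pre><code>"adminPassword": {
    "type": "securestring",
    "metadata": {
        "description": "Password for the Virtual Machine administrator account."
    }
}</code></pre>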
<p>A more complex parameter e.g. Windows OS Version, with few more properties, is shown below:</p>
<pre><code>"windowsOSVersion": {
            "type": "string",
            "defaultValue": "2012-R2-Datacenter",
            "allowedValues": [
                "2008-R2-SP1",
                "2012-Datacenter",
                "2012-R2-Datacenter"
            ],
            "metadata": {
                "description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version. Allowed values: 2008-R2-SP1, 2012-Datacenter, 2012-R2-Datacenter."
            }
        }</code></pre>
<p>It has the below additional properties:</p>
<ol>
<li><strong>defaultValue</strong> - The default value. The end user can change this value when deploying the template; if no value is provided, this value is picked.</li>
<li><strong>allowedValues</strong> - An array of values that are allowed for the parameter. Only a value from this set is accepted as input.</li>
</ol>
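<p>When deploying, values for these parameters are usually supplied through a separate parameters file. A minimal sketch of such a file for the windowsOSVersion parameter is shown below (treat the exact values as illustrative):</p>
<pre><code>{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "windowsOSVersion": {
            "value": "2012-R2-Datacenter"
        }
    }
}</code></pre>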
<h3>2. Using Parameters</h3>
<p>Using parameters is easy. Wherever in your template (in the variables or resources section) you want to use the value of a parameter, just use the <code>parameters</code> function, with the name of the parameter as input, enclosed in square brackets as shown below. </p>
<pre><code>[parameters('windowsOSVersion')]</code></pre>
<p>If the parameter value is assigned to a property, enclose it in double quotes, as shown below:</p>
<pre><code>"sku": "[parameters('windowsOSVersion')]"</code></pre>
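<p>As a fuller illustration, in a Windows VM template this parameter typically feeds the image reference of the VM's storage profile. The fragment below is a sketch based on the common quickstart template shape (the surrounding property names are assumptions, not taken from this post):</p>
<pre><code>"storageProfile": {
    "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "[parameters('windowsOSVersion')]",
        "version": "latest"
    }
}</code></pre>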
<h3>3. Best Practices</h3>
<ul>
<li>Try to always provide default values.</li>
<li>Provide metadata so that you can give insight into what the parameter is meant for.</li>
<li>Use complete, descriptive names, no matter how long.</li>
<li>Use <strong>camel casing</strong> to name your parameters, i.e. the first letter is lowercase, and every new word after that starts with a capital letter, with no spaces between words. E.g. windowsOSVersion</li>
<li>Use properties like minLength and allowedValues to impose restrictions. This reduces human error.</li>
</ul>
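<p>Putting these practices together, a parameter declaration that follows all of them might look like the sketch below (the name, lengths and default value are illustrative):</p>
<pre><code>"adminUserName": {
    "type": "string",
    "minLength": 1,
    "maxLength": 20,
    "defaultValue": "azureAdmin",
    "metadata": {
        "description": "Administrator username for the Virtual Machine."
    }
}</code></pre>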
<p>That's all there is to parameters in ARM Templates. If you have any doubts, please comment in the section below. Use the links at the top to learn about the other components of an ARM Template.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters</link>
<pubDate>Wed, 24 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - What is in an ARM Template - Understanding All Components</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>As we discussed <a href="../step-by-step-azure-resource-manager-arm-templates-index/">earlier in the introduction</a> <strong>Azure Resource Manager (ARM) Template</strong> is a JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group. It also defines the dependencies between the deployed resources.</p>
<p>In this post, we will deconstruct a basic ARM template and understand its various components.</p>
<p>Any ARM Template will look like below:</p>
<pre><code>{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [ {}, {} ]
}</code></pre>
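<p>The optional outputs element can also appear at the root level, making the full skeleton:</p>
<pre><code>{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [ {}, {} ],
    "outputs": {}
}</code></pre>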
<p>Snapshot of the Template at root level, as generated via Visual Studio:</p>
<p><img src="https://HarvestingClouds.com/images/147520488857edd71873f9b.png" alt="ARM Template Components" /></p>
<p>As you can see, the components (or properties) of any ARM template include:</p>
<ol>
<li>Schema</li>
<li>Content Version</li>
<li>Parameters</li>
<li>Variables</li>
<li>Resources</li>
</ol>
<p>Let's look at these in more detail.</p>
<table border="1" cellpadding="4" cellspacing="4">
        <colgroup>
            <col>
            <col>
            <col>
            <col>
        </colgroup>
        <tbody valign="top">
            <tr>
                <th>Element name</th>
                <th>Required</th>
                <th>JSON Type</th>
                <th>Description</th>
            </tr>
            <tr>
                <td>$schema</td>
                <td>Yes</td>
                <td>String Value</td>
                <td>Location of the JSON schema file that describes the version of the template language.</td>
            </tr>
            <tr>
                <td>contentVersion</td>
                <td>Yes</td>
                <td>String Value</td>
                <td>Version of the template (such as 1.2.0.20). When deploying resources using the template, this value can be used to make sure that the right template is being used.</td>
            </tr>
            <tr>
                <td>parameters</td>
                <td>No</td>
                <td>JSON Object</td>
                <td>Values that are provided by the end user (manually or via a parameters file) when deployment is executed to customize resource deployment.</td>
            </tr>
            <tr>
                <td>variables</td>
                <td>No</td>
                <td>JSON Object</td>
<td>Values that are reused multiple times in the template. Unlike parameters, their values are defined within the template itself and are not required as inputs from the end user.</td>
            </tr>
            <tr>
                <td>resources</td>
                <td>Yes</td>
                <td>Array of Objects</td>
                <td>Types of services that are deployed or updated in a resource group. Each JSON object in this Array denotes an Azure Resource.</td>
            </tr>
            <tr>
                <td>outputs</td>
                <td>No</td>
                <td>JSON Object</td>
                <td>Values that are returned after deployment.</td>
            </tr>
        </tbody>
</table>
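<p>As a quick example of the outputs element, a template could return a value after deployment like the sketch below (the output name and the referenced parameter are illustrative):</p>
<pre><code>"outputs": {
    "adminUserName": {
        "type": "string",
        "value": "[parameters('adminUserName')]"
    }
}</code></pre>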
<p>Now that you know what each part is at a high level, in the next posts we will look at the 4 key components in detail.</p>
<ol>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-2-parameters/">Parameters</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-3-variables/">Variables</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-4-resources/">Resources</a></li>
<li><a href="../step-by-step-arm-templates-what-is-in-an-arm-template-understanding-components-5-outputs/">Outputs</a></li>
</ol>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-what-is-in-an-arm-template-understanding-all-components</link>
<pubDate>Mon, 22 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Step by Step ARM Templates - JSON 101 for IT Administrators</title>
<description><![CDATA[<p><strong>Index</strong> of all blogs in this <strong>Step by Step ARM Templates series</strong> is located here: <a href="/post/step-by-step-azure-resource-manager-arm-templates-index/">Step by Step Azure Resource Manager (ARM) Templates - Index</a></p>
<p>Azure Resource Manager (ARM) templates are written in JSON or <strong>JavaScript Object Notation</strong>. To understand ARM templates, you need to understand a few quick basics about JSON. These will lay a foundation that will make ARM templates much easier to understand.</p>
<p>JSON or JavaScript Object Notation (pronounced like &quot;Jay-son&quot;) is a text-based data format that's designed to be human-readable, lightweight, and easy to transmit between a server and a web client. Its syntax is derived from JavaScript. Think of this as an even more compact version of XML files.</p>
<p>JSON is a popular notation for transmitting data through RESTful web services. The official internet media type for JSON is <code>application/json</code>, and JSON files typically have a <code>.json</code> extension.</p>
<p>To understand JSON we need to understand <strong>3 main components</strong>. These components are like building blocks, using which you can build very complex JSON files.</p>
<h2>1. Objects</h2>
<p>Objects are the heart of JSON. An object denotes a real-life object, e.g. an Employee. Just like a real-life object, it has various properties and a value for each of these properties. E.g. an Employee will have a Name property with the value John. Further, an Employee object can have various other properties like Age, Salary, Department etc. To denote an object in JSON:</p>
<ul>
<li>One object is represented by curly brackets. It begins with an opening curly bracket, i.e. <code>{</code>, and ends with a closing curly bracket, i.e. <code>}</code>.</li>
<li>Denote the properties and corresponding values as <code>"key" : "value"</code> or <code>"property" : "value"</code> pairs.</li>
<li>Property names are always enclosed in double quotes, as they are always of type string.</li>
<li>Values are enclosed in double quotes if they are of string type. Numbers and boolean values take no quotes.</li>
<li>Each property is separated from the next property by a comma.</li>
</ul>
<p><strong>Note:</strong> A JSON file is typically a single JSON object. At the root level it starts with an opening curly bracket, i.e. <code>{</code>, and ends with a closing curly bracket, i.e. <code>}</code>. There can't be any other objects at the root level. Think of this as similar to how an XML file can have only one element at the root level. </p>
<p>Example Employee object is shown below:</p>
<pre><code>{
    "Name" : "John",
    "Age" : 34,
    "Department" : "Finance",
    "Salary" : "100000",
    "IsAdmin" : true
}</code></pre>
<h2>2. Arrays</h2>
<p>Simply put, arrays are collections of items. In JSON, <strong>square brackets</strong> represent an array. E.g. an array of 3 employees will look like below:</p>
<pre><code>[
  {
        "Name" : "John",
        "Age" : 34
    },
   {
        "Name" : "Mary",
        "Age" : 32
    },
   {
        "Name" : "Matthew",
        "Age" : 29
    }
]</code></pre>
<h2>3. Nesting of Objects</h2>
<p>Now things get more interesting with nesting of objects. Nesting means that one object can have, as one of its properties, another complex object. Don't worry if that sounds confusing; let's understand it using an example. The address where a person lives can be represented by an object. This object will look like below:</p>
<pre><code>{
  "StreetNumber" : "50",
  "StreetName" : "Brian Harrison Way",
  "Unit Number" : 22,
  "City" : "Toronto",
  "Country" : "Canada"
}</code></pre>
<p>Now an Employee object can have an Address object as one of its properties (because an employee needs to live somewhere). This new, complex Employee object will look like below, with the nested Address object as one of its properties:</p>
<pre><code> {
        "Name" : "John",
        "Age" : 34,
        "Department" : "Finance",
        "Salary" : "100000",
        "IsAdmin" : true,
        "Address" :   {
                          "StreetNumber" : "50",
                          "StreetName" : "Brian Harrison Way",
                          "Unit Number" : 22,
                          "City" : "Toronto",
                          "Country" : "Canada"
                       }
    }</code></pre>
<p>That's all there is to it. Now you can use these 3 components to build very complex JSON files/templates. Even the most complex template can be broken down into these 3 components. </p>
<p>Below is a complex example using all 3 components. </p>
<pre><code>{
    "Department": "Finance",
    "TotalEmployees": 2,
    "Employees": [
        {
            "Name": "John",
            "Age": 34,
            "Department": "Finance",
            "Salary": "100000",
            "IsAdmin": true,
            "Address": {
                "StreetNumber": "50",
                "StreetName": "Brian Harrison Way",
                "Unit Number": 22,
                "City": "Toronto",
                "Country": "Canada"
            }
        },
        {
            "Name": "John",
            "Age": 34,
            "Department": "Finance",
            "Salary": "100000",
            "IsAdmin": true,
            "Address": {
                "StreetNumber": "50",
                "StreetName": "Brian Harrison Way",
                "Unit Number": 22,
                "City": "Toronto",
                "Country": "Canada"
            }
        }
    ]
}</code></pre>
<p>The above JSON object denotes one department with the name Finance and a total of 2 employees. The &quot;Employees&quot; property is an array of 2 employees. Each employee object further has a complex property, Address, which is another object. </p>
<p>If you understand each of the 3 components, you should be able to build and understand even the most complex JSON files with ease.</p>]]></description>
<link>https://HarvestingClouds.com/post/step-by-step-arm-templates-json-101-for-it-administrators</link>
<pubDate>Wed, 17 Aug 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>ASR Setup for VMs running in VMWare without VMware level User Access</title>
<description><![CDATA[<h3>Problem Statement</h3>
<p>Recently I was setting up <strong>Azure Site Recovery</strong> (or ASR) in an environment. We had multiple VMs in a VMware environment. The environment was managed by a third party who did not want to give out any service account for VMware, as their environment was shared among different customers. So we had access only to the VMs. Without the relevant access, ASR for VMware was out of the question for us.</p>
<h3>Solution</h3>
<p>We treated the VMs in such an environment as physical machines when setting up ASR to replicate them to Azure.
We used the option &quot;<strong><em>Not virtualized/other</em></strong>&quot; in the &quot;<strong><em>Prepare Infrastructure</em></strong>&quot; wizard of ASR. We <strong>treated the VMs as physical servers</strong> and did not face any issues during the migration. </p>
<p>Refer to the screenshot below for the exact option discussed above.</p>
<p><img src="https://HarvestingClouds.com/images/14696297155798c513ea908.png" alt="Protection Goal" /></p>
<p>Later, when enabling replication for any server, run the &quot;<strong><em>Enable Replication</em></strong>&quot; wizard by clicking &quot;<strong>+Replicate</strong>&quot; on the ASR vault's blade. Then select &quot;<strong>Machine Type</strong>&quot; as &quot;<strong><em>Physical Machine</em></strong>&quot; and add the physical machines by specifying their IP addresses. </p>
<p><strong>Note:</strong> For this approach to work, the Configuration server should be on the same network as the VM being considered as Physical Machine.</p>
<p>We were able to migrate many servers successfully and without any issues using this approach.</p>]]></description>
<link>https://HarvestingClouds.com/post/asr-setup-for-vms-running-in-vmware-without-vmware-level-user-access</link>
<pubDate>Tue, 26 Jul 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Uploading and Downloading files securely from Azure Storage Blob via PowerShell</title>
<description><![CDATA[<p><strong>Azure blob storage</strong> can provide a very highly available way to store your files in the cloud. You can dynamically add or remove the files in an automated fashion. These files can then be used for any number of purposes. E.g. a parameter file for an ARM template can be kept in Azure blob storage and read dynamically while creating resources from the ARM template.</p>
<p><strong>The whole process can be broken down into 3 parts</strong>:</p>
<ol>
<li>Generating the context to the storage container</li>
<li>Uploading the files using the context</li>
<li>Downloading the files using the context</li>
</ol>
<h3>1. Generating the context to the storage container</h3>
<p>The context to the storage blob container can be created in one of the 3 ways, based on your security requirements. All methods use the <code>New-AzureStorageContext</code> cmdlet to generate the storage context. The methods differ on how you pass the parameters to this cmdlet.</p>
<p><strong>A. Via fetching the Azure Storage Key</strong></p>
<p>This first method uses the <code>Get-AzureStorageKey</code> to fetch the storage key. This key is then used to generate the context as shown below.</p>
<pre><code>$StorageAccountName = "yourstorageaccount"
$StorageAccountKey = Get-AzureStorageKey -StorageAccountName $StorageAccountName
$Ctx = New-AzureStorageContext $StorageAccountName -StorageAccountKey $StorageAccountKey.Primary</code></pre>
<p><strong>B. Via fetching the Azure Storage Container SAS Token</strong></p>
<p>This second method uses the <code>New-AzureStorageContainerSASToken</code> to create a new SAS token to securely access the storage container. This token is then used to generate the context as shown below.</p>
<pre><code>$sasToken = New-AzureStorageContainerSASToken -Container abc -Permission rl
$Ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName -SasToken $sasToken</code></pre>
<p><strong>C. Via Connection String</strong></p>
<p>This third method uses a connection string, entered manually, which is then used to generate the context as shown below.</p>
<pre><code>$ConnectionString = "DefaultEndpointsProtocol=http;BlobEndpoint=&lt;blobEndpoint&gt;;QueueEndpoint=&lt;QueueEndpoint&gt;;TableEndpoint=&lt;TableEndpoint&gt;;AccountName=&lt;AccountName&gt;;AccountKey=&lt;AccountKey&gt;"
$Ctx = New-AzureStorageContext -ConnectionString $ConnectionString</code></pre>
<h3>2. Uploading the files using the context</h3>
<p>Now that you have the context to the storage account you can upload and download files from the storage blob container.
Use the below code to upload a file named &quot;<em>Parameters.json</em>&quot;, located on the local machine at &quot;<em>C:\Temp</em>&quot; directory.</p>
<pre><code>#Uploading File
$BlobName = "Parameters.json"
$localFile = "C:\Temp\" + $BlobName
$ContainerName  = "vhds"

#Note the Force switch will overwrite if the file already exists in the Azure container
Set-AzureStorageBlobContent -File $localFile -Container $ContainerName -Blob $BlobName -Context $Ctx -Force</code></pre>
<h3>3. Downloading the files using the context</h3>
<p>Download works in an almost identical manner. You use the Get cmdlet instead of Set, as shown below, to download a file to a local folder located at &quot;<em>C:\Downloads</em>&quot;.</p>
<pre><code>#Download File
$BlobName = "Parameters.json"
$localTargetDirectory = "C:\Downloads"
$ContainerName  = "vhds"

Get-AzureStorageBlobContent -Blob $BlobName -Container $ContainerName -Destination $localTargetDirectory -Context $Ctx</code></pre>
<p>I hope this helps simplify the automated usage of Azure Storage container. Let us know your concerns or questions if any.</p>
<p>You can find the <strong>complete sample</strong> at the below link on GitHub. Right-click and select Save As to save the file: <a href="https://raw.githubusercontent.com/HarvestingClouds/PowerShellSamples/master/Scripts/StorageAccountBlobManagement.ps1">StorageAccountBlobManagement.ps1</a></p>
<p><strong>Reference:</strong> <a href="https://azure.microsoft.com/en-us/documentation/articles/storage-powershell-guide-full/" target="_blank">Using Azure PowerShell with Azure Storage</a></p>]]></description>
<link>https://HarvestingClouds.com/post/uploading-and-downloading-files-securely-from-azure-storage-blob-via-powershell</link>
<pubDate>Wed, 18 May 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure comes to Canada (along with Office 365)</title>
<description><![CDATA[<p>Last week marked the general availability of Azure datacenters in Canada. Office 365 was also released there last week. Microsoft has set up 2 new datacenters in Canada. </p>
<h3>Where Exactly are these datacenters located</h3>
<ol>
<li><strong>Canada Central</strong> - The first datacenter is located in Toronto.</li>
<li><strong>Canada East</strong> - The second datacenter is located in Quebec City.</li>
</ol>
<p>Now when you are creating a new resource (like a Virtual Machine) you will see these two options.</p>
<p><img src="https://HarvestingClouds.com/images/1463686879573e16df4f64c.png" alt="New Locations" /></p>
<p>Check out the brief announcement video by <strong>Janet Kennedy</strong>, President of Microsoft Canada:</p>
<iframe src="https://channel9.msdn.com/Blogs/CANITPRO/The-Microsoft-Canada-Cloud-is-Open-for-Business/player" width="560" height="315" allowFullScreen frameBorder="0"></iframe>
<h3>Key Resources:</h3>
<ul>
<li>These locations are also listed in the official Microsoft Regions list here: <a href="https://azure.microsoft.com/en-us/regions/#services?WT.mc_id=azurebg_email_Trans_1106_Tier2_Release_MOSP" target="_blank">Azure Regions</a></li>
<li>Various resources and information for cloud in Canada are available here at <a href="https://www.microsoft.com/en-ca/sites/datacentre/default.aspx" target="_blank">Cloud Accelerate site for Canada</a>.</li>
<li>You can read about this announcement and upcoming features here: <a href="https://azure.microsoft.com/en-us/blog/microsoft-cloud-accelerates-in-canada-and-expands-to-south-korea/?WT.mc_id=azurebg_email_Trans_1106_Tier2_Release_MOSP" target="_blank">Microsoft Cloud accelerates in Canada and expands to South Korea</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/azure-comes-to-canada-along-with-office-365</link>
<pubDate>Mon, 16 May 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>NEW Feature - Azure Cool Blob Storage</title>
<description><![CDATA[<p>Have you heard about the new <strong>Azure Cool Blob Storage</strong>? </p>
<p>If you haven’t heard about it, this is Microsoft's low-cost storage for <strong>Cool</strong> object data. “Example use cases for cool storage include backups, media content, scientific data, compliance and archival data. In general, any data which lives for a longer period of time and is accessed less than once a month is a perfect candidate for cool storage.” It is similar to what <strong>Glacier storage tier</strong> provides in Amazon Web Services.</p>
<ul>
<li><strong>Pricing:</strong> Its cost is as low as $0.01/GB.</li>
<li><strong>Availability:</strong> 99% (as compared to 99.9% for Hot Storage). With Read-access geo-redundant storage (or RA-GRS) the SLA is 99.9% (as compared to 99.99% for Hot).</li>
<li><strong>Deciding which access tier to use:</strong> If the objects in the storage account will be accessed frequently, go with the <strong>Hot tier</strong>. Select the <strong>Cool tier</strong> for infrequently accessed data.</li>
</ul>
<p>Now when you go to New -&gt; &quot;Data + Storage&quot; -&gt; Storage Account and try to create a Blob storage account, you can select either the Cool or Hot tier as the <strong>Access Tier</strong>. </p>
<p><img src="https://HarvestingClouds.com/images/146232375357294a2980ece.png" alt="Storage Tiers" /></p>
<p>Also, note that at the time of this writing, Blob storage account is <strong>only available in these locations</strong>: Central US, East US 2, North Central US, North Europe, West Europe, Southeast Asia, Japan East, Japan West, Central India, South India, West India.</p>
<p><strong>Resources to know more:</strong>  </p>
<ul>
<li><a href="https://azure.microsoft.com/en-us/blog/introducing-azure-cool-storage/" target="_blank">Official Announcement</a></li>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/storage-blob-storage-tiers/" target="_blank">Getting started guide</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/new-feature-azure-cool-blob-storage</link>
<pubDate>Mon, 02 May 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Taking Automatic Remediation Action on Azure VM Alert Generation</title>
<description><![CDATA[<p>Using a new feature in Azure, you can now easily configure an Azure Automation Runbook to be triggered when an alert fires on an Azure Virtual Machine, in order to take a remediation action. To leverage this feature, all you need to do is link the alert on the Azure VM to an existing Azure Automation Runbook.</p>
<blockquote>
<p>Note: This feature is supported only for the V2 Virtual Machines, i.e. the VMs created using ARM portal.</p>
</blockquote>
<p>To access this feature open your Virtual Machine. Then go to the Manage alerts section in the Settings:</p>
<p><img src="https://HarvestingClouds.com/images/14618976705722c9c653752.png" alt="Setting - Manage alerts" /></p>
<p>Then open an existing alert or click on &quot;Add alert&quot; to create a new one. Specify the criteria for the alert. Scroll down to the bottom and you can view the new section to link the alert to an Automation Runbook.</p>
<p><img src="https://HarvestingClouds.com/images/14618990795722cf4763dce.png" alt="Automation Runbook for Alert" /></p>
<h3>Under the hood</h3>
<p>The alert sends data to your Runbook in a special format, which your Runbook should expect. Under the hood this happens via webhooks: the alert data is passed via an HTTP POST request. The Automation webhook service extracts the alert data from the POST request and passes it to the runbook in a parameter called <strong>&quot;WebhookData&quot;</strong>. The Runbook will look like below:</p>
<pre><code>[OutputType("PSAzureOperationResponse")]

param ( [object] $WebhookData )

if ($WebhookData)
{
    # Get the data object from WebhookData
    $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)

    #Rest of the script comes here
}</code></pre>
<p><strong>In a nutshell</strong>, you can now trigger Azure Automation Runbooks to take remediation actions on Virtual Machines in case an alert is triggered. </p>
<p><strong>Reference with complete Runbook sample:</strong> <a href="https://azure.microsoft.com/en-us/documentation/articles/automation-azure-vm-alert-integration/">Azure Automation solution - remediate Azure VM alerts</a></p>]]></description>
<link>https://HarvestingClouds.com/post/taking-automatic-remediation-action-on-azure-vm-alert-generation</link>
<pubDate>Wed, 27 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>ROADMAP - Solutions to help with Migration from Azure ASM to ARM portal</title>
<description><![CDATA[<p>In addition to the tool I mentioned yesterday regarding <a href="https://HarvestingClouds.com/Migrating-from-Azure-ASM-to-ARM-portal">Migrating from Azure ASM to ARM portal</a>, there are various solutions in the pipeline. This post looks at the high-level roadmap for the same from Microsoft.</p>
<p>Microsoft has promised that they are committed to making the migration from the ASM (older) to the ARM (newer) portal easier. Various solutions are already in the pipeline for this.
Below are the details and tentative timelines for these solutions.</p>
<table border="1" cellpadding="0" cellspacing="0"> <tbody> <tr> <td valign="top" width="29%"> <p><b>Solution</b></p> </td> <td valign="top" width="51%"> <p><b>Customer Experience</b></p> </td> <td valign="top" width="18%"> <p><b>Expected availability in 2016</b></p> </td> </tr> <tr> <td valign="top" width="29%"> <p>Script migration</p> </td> <td valign="top" width="51%"> <p>VM is rebooted as it is recreated in the Resource Manager model. While the Virtual Machines for the environment are recreated, the network is disconnected.</p> </td> <td valign="top" width="18%"> <p align="center">Q1</p> </td> </tr> <tr> <td valign="top" width="29%"> <p>Virtual Machines, no VNET</p> </td> <td valign="top" width="51%"> <p>As all Virtual Machines deployed in the Resource Manager model must be in a VNet, Virtual Machines will be migrated and placed in a new VNET. This will result in a change in network configuration, requiring a reboot to reconnect.</p> </td> <td valign="top" width="18%"> <p align="center">Q2</p> </td> </tr> <tr> <td valign="top" width="29%"> <p>Virtual Machines with VNET</p> </td> <td valign="top" width="51%"> <p>Starting in Q2, the platform will offer Virtual Machine migration from ASM to Resource Manager model without disrupting the running Virtual Machine. This will require disconnecting any VNets connected on-premises, whether via ExpressRoute or VPN, before doing the migration.</p> </td> <td valign="top" width="18%"> <p align="center">Q2</p> </td> </tr> <tr> <td valign="top" width="29%"> <p>Virtual Machines with basic hybrid (one connection)</p> </td> <td valign="top" width="51%"> <p>Starting in Q3, the platform will offer Virtual Machine migration from ASM to Resource Manager model without disrupting the running Virtual Machine and with minimal disruption to a basic hybrid connection, limited to just one connection back on-premises. 
More complex connections will require disconnecting before doing the migration.</p> </td> <td valign="top" width="18%"> <p align="center">Q3</p> </td> </tr> </tbody> </table>
<p>Reference: <a href="https://azure.microsoft.com/en-us/blog/transitioning-to-the-resource-manager-model/">Transitioning to the Resource Manager model</a></p>]]></description>
<link>https://HarvestingClouds.com/post/roadmap-solutions-to-help-with-migration-from-azure-asm-to-arm-portal</link>
<pubDate>Fri, 22 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Migrating from Azure ASM to ARM portal</title>
<description><![CDATA[<p>With the co-existing Azure Service Management (ASM) portal (older) and Azure Resource Manager (ARM) portal (newer), there has been a lot of confusion and many problems for IT administrators.
The bottom line of all the discussion around the two portals is that <strong>ARM is the future and is here to stay</strong>. It means that you need to <strong>plan and migrate</strong> your resources from the ASM portal to the ARM portal.</p>
<p>The key resource is your infrastructure which primarily consists of virtual machines. To migrate a single Virtual Machine (VM) from ASM portal to ARM portal you can leverage a set of PowerShell scripts called ASM2ARM.
You can download these scripts and check their description on <a href="https://github.com/fullscale180/asm2arm">GitHub here on the <strong>ASM2ARM</strong> page</a>. You can check the detailed instructions there too.</p>
<p>Planning this right now is very important, as the transition to the Azure Resource Manager model is already underway. Any future development and investment seems to be happening only in the newer portal.</p>
<p><strong>Reference:</strong> <a href="https://github.com/fullscale180/asm2arm">ASM2ARM scripts on GitHub</a></p>]]></description>
<link>https://HarvestingClouds.com/post/migrating-from-azure-asm-to-arm-portal</link>
<pubDate>Thu, 21 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>PowerShell DSC - Partial Configurations</title>
<description><![CDATA[<p><strong>Partial Configurations</strong> is a new feature in PowerShell 5.0 Desired State Configuration (DSC). It allows configurations to be delivered in parts or fragments. These configurations can come from various sources.
The Local Configuration Manager (LCM) on the target node puts these partial configurations from different sources together and then applies them as a single configuration.</p>
<p>This opens various possibilities for enterprises to manage their infrastructure and delegate responsibility for a single node to various teams. The team expert in a particular area can focus on that feature without worrying about the others.</p>
<p>You can have partial configurations in following modes:</p>
<ol>
<li>Push Mode</li>
<li>Pull Mode</li>
<li>Hybrid Mode (i.e. combination of Push and Pull)</li>
</ol>
<h3>Configuration for the PUSH Mode</h3>
<p>You need to follow three steps to configure Partial configurations for the PUSH mode:</p>
<ul>
<li>Configure the LCM, on the target node, to expect partial configurations</li>
<li>Push each partial configuration from its source using the <strong>Publish-DSCConfiguration</strong> cmdlet. The target node will automatically combine the partial configurations into a single configuration.</li>
<li>Apply the configuration by calling the <strong>Start-DSCConfiguration</strong> cmdlet</li>
</ul>
<h3>Configuration for the PULL Mode</h3>
<p>This is a bit more complex than the Push mode. In a nutshell, you only need a couple of steps:</p>
<ul>
<li>Configure the LCM, on the target node, to receive partial configurations from pull servers</li>
<li>Name and locate the configuration documents properly on the pull servers</li>
</ul>
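<p>As a rough sketch of the Pull mode (based on my reading of the MSDN article below; the server URL, ConfigurationID, and configuration names are placeholders), the LCM meta-configuration points each fragment at a pull server as its configuration source:</p>
<pre><code class="language-powershell"># LCM meta-configuration pulling two fragments from a DSC pull server
[DSCLocalConfigurationManager()]
configuration PartialConfigPullDemo
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode     = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL = 'https://PullServer:8080/PSDSCPullServer.svc'
        }
        PartialConfiguration OSConfig
        {
            ConfigurationSource = '[ConfigurationRepositoryWeb]PullServer'
        }
        PartialConfiguration SQLConfig
        {
            ConfigurationSource = '[ConfigurationRepositoryWeb]PullServer'
        }
    }
}</code></pre>
<p>The naming part matters: as I understand it, each document on the pull server must be named after its partial configuration (for example <code>OSConfig.1d545e3b-60c3-47a0-bf65-5afc05182fd0.mof</code> when using a ConfigurationID) and have a matching checksum file, otherwise the LCM cannot match the fragments. Check the references below for the exact conventions.</p>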
<p>To learn more about DSC partial configurations, see the references below:</p>
<ul>
<li><a href="https://automationnext.wordpress.com/2016/04/19/powershell-desired-state-configuration-partial-configurations-without-configurationid/">Detailed Blog by AutomationNext with very valuable insights</a></li>
<li><a href="https://msdn.microsoft.com/en-us/powershell/dsc/partialconfigs">Official MSDN Article</a></li>
</ul>]]></description>
<link>https://HarvestingClouds.com/post/powershell-dsc-partial-configurations</link>
<pubDate>Wed, 20 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Container Service hits General Availability</title>
<description><![CDATA[<p>Azure Container Service has finally hit General Availability today. </p>
<p>If you don't know it already, it is the &quot;container hosting solution&quot; optimized for Microsoft's Azure cloud.
All the tools you may be familiar with from working with container orchestrators, such as Apache Mesos or Docker Swarm, should work as expected. It uses only open source components in the orchestration layers to give you portability of full applications.</p>
<p>You can find the announcement here: <a href="https://azure.microsoft.com/en-us/updates/general-availability-azure-container-service/">GA for Azure Container Service</a></p>
<p>You can learn more about the Container Service as offered by Azure on the product page here: <a href="https://azure.microsoft.com/en-us/services/container-service/">Azure Container Service</a></p>]]></description>
<link>https://HarvestingClouds.com/post/azure-container-service-hits-general-availability</link>
<pubDate>Tue, 19 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Azure Authentication - Authenticating any Azure API Request in your Application</title>
<description><![CDATA[<p>I have created a code sample to showcase how you can authenticate any Azure API request programmatically.
This also contains <strong>a Reusable Authentication Helper class</strong> which you can directly use in your code.</p>
<h3>Where is the code</h3>
<p>You can find the complete code sample along with the reusable Azure Authentication Helper class library from this GitHub repo:
<a href="https://github.com/HarvestingClouds/AzureAuthentication">Azure Authentication Sample</a></p>
<h3>What are my authentication options</h3>
<p>You have the following options:</p>
<ul>
<li>Authenticating by <strong>Prompting</strong> for Credentials from end user. (This needs end user interaction)</li>
<li>Authenticating by <strong>Credentials</strong> i.e. using a password. (This does not need any end user interaction)</li>
<li>Authenticating by using a <strong>Certificate</strong> (This also does not need any end user interaction)</li>
</ul>
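<p>To give a flavor of the second option (credentials, no user interaction), here is a sketch of the underlying OAuth2 client-credentials flow done with a plain REST call rather than the helper class. The tenant ID, client ID, and secret are placeholders for your own Azure AD application registration:</p>
<pre><code class="language-powershell"># Request a token from Azure AD using the client-credentials grant
$tenantId = '&lt;your-tenant-id&gt;'
$body = @{
    grant_type    = 'client_credentials'
    client_id     = '&lt;your-client-id&gt;'
    client_secret = '&lt;your-client-secret&gt;'
    resource      = 'https://management.core.windows.net/'
}
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" -Body $body

# Use the bearer token on any Azure API request
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }
Invoke-RestMethod -Uri 'https://management.azure.com/subscriptions?api-version=2015-01-01' `
    -Headers $headers</code></pre>
<p>The helper class in the repo wraps the same idea (via the ADAL library) into reusable methods, so you don't have to hand-roll the token handling.</p>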
<p>I have provided this functionality in 3 separate methods, in a separate class file along with its interface.
You can follow the instructions in the ReadMe file in the GitHub repo and start using any one of the methods.</p>
<p>I hope you find this useful and that it saves you the trouble of figuring things out, which I have already been through.</p>
<p>Let me know in the comments below if you have any questions or anything to add to this.</p>]]></description>
<link>https://HarvestingClouds.com/post/azure-authentication-authenticating-any-azure-api-request-in-your-application</link>
<pubDate>Fri, 15 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Run Azure Automation Runbooks via PowerShell ISE</title>
<description><![CDATA[<p>Today I came across this blog post from my friend: <a href="https://scomanswers.wordpress.com/2016/04/11/azure-automation-powershell-ise-add-on/">Azure Automation PowerShell ISE add-on</a></p>
<p>What I came to know is that you can now run Azure Automation runbooks via the PowerShell ISE. This solves a big pain point for all Azure developers.
Now you will be able to develop and test your scripts right from the convenience of the local PowerShell ISE on your laptop.</p>
<h3>What you need to do</h3>
<p>All you need to do is install the PowerShell add-on using the cmdlet below:</p>
<pre><code class="language-powershell">Find-Module AzureAutomationAuthoringToolkit | Install-Module -Scope CurrentUser</code></pre>
<p>Then import the module using the cmdlet below:</p>
<pre><code class="language-powershell">Import-Module AzureAutomationAuthoringToolkit</code></pre>
<p>You can configure the add-on using the Configuration tab in the add-on and start getting your hands dirty.</p>
<h3>Official Information from the Add-On Help</h3>
<h4>Capabilities</h4>
<ul>
<li>Test runbooks on your local machine and in the Azure Automation service</li>
<li>Store and edit Automation Assets locally </li>
<li>Use Automation Activities (Get-AutomationVariable, Get-AutomationPSCredential, etc) in local PowerShell scripts </li>
<li>Sync changes back to your Automation Account </li>
<li>Run test jobs in Automation and view results </li>
</ul>
<h4>Notes</h4>
<p>Assets</p>
<ul>
<li>Secret values (passwords, encrypted variables) are not downloaded automatically; they need to be set manually the first time the account is synced </li>
<li>Values that haven't been downloaded will be highlighted </li>
<li>Asset values you enter locally will not get overwritten when you sync from the cloud </li>
</ul>
<p>Runbooks </p>
<ul>
<li>Native PowerShell and PowerShell Workflow runbooks are supported </li>
</ul>
<p>Check the screenshot regarding this information below:
<img src="https://HarvestingClouds.com/images/1461735632572050d069253.png" alt="Official Notes" title="Official Notes" /></p>
<h3>How much time it will take</h3>
<p>In all, it should take you under 10 minutes to get set up and rolling.</p>
<h3>Where is more information on this and screenshots</h3>
<p>Go to the official <a href="https://blogs.technet.microsoft.com/msoms/2016/04/08/the-way-cool-azure-automation-powershell-ise-add-on/">Technet blog by clicking HERE.</a></p>
<p>Start playing around and let us know your initial impressions in the comments below. If you have any doubts, I will be happy to address them.</p>]]></description>
<link>https://HarvestingClouds.com/post/run-azure-automation-runbooks-via-powershell-ise</link>
<pubDate>Thu, 14 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Various Options Added to Buy Microsoft Azure Active Directory Basic</title>
<description><![CDATA[<p>Today Microsoft announced that they have added various options to buy Microsoft Azure Active Directory (AAD) Basic.
You can now buy it through the Direct program as well as through the following options:</p>
<ul>
<li><a href="https://www.microsoft.com/en-us/licensing/licensing-programs/enterprise.aspx?WT.mc_id=azurebg_email_Trans_1065_Tier2_Release_MOSP">Microsoft Enterprise Agreement</a></li>
<li><a href="https://www.microsoft.com/en-us/licensing/licensing-programs/open-license.aspx?WT.mc_id=azurebg_email_Trans_1065_Tier2_Release_MOSP">Open Volume License Program</a></li>
<li><a href="https://partner.microsoft.com/en-US/Solutions/cloud-reseller-overview?WT.mc_id=azurebg_email_Trans_1065_Tier2_Release_MOSP">Microsoft Cloud Solution Provider</a></li>
</ul>
<p>To purchase, sign in to the <a href="https://portal.office.com">Office 365 Administration Portal</a></p>
<p>You can also watch the video below for details. Although the video is for AAD Premium, the steps are essentially similar for AAD Basic.</p>
<iframe src="https://channel9.msdn.com/Series/Azure-Active-Directory-Videos-Demos/How-to-Purchase-Azure-Active-Directory-Premium-Existing-Customer/player" width="560" height="315" allowFullScreen frameBorder="0"></iframe>
<p>You can also engage a partner to assist you with the purchase and with any of your Azure Active Directory-related requirements.
<a href="http://www.infrontconsulting.com/">Infront Consulting Group</a> (where I currently work) is one such partner, highly respected in the market and a Microsoft Gold Certified Partner.</p>
<p>Thanks for reading! If you have any questions please ask in the comments below.</p>]]></description>
<link>https://HarvestingClouds.com/post/various-options-added-to-buy-microsoft-azure-active-directory-basic</link>
<pubDate>Wed, 13 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Multiple Values In Grid.Mvc Single Column Filter via Checkboxes with Code Sample</title>
<description><![CDATA[<p>I have been struggling to implement multiple filters in a single column in the Grid.Mvc tool. I have solved this by altering the code and updating the custom widget.
<strong>Note:</strong> The WithMultipleFilters() option will not help you here. That option enables multiple filters on different columns. To have multiple filters in the same column, you need to update the way filtering works in the tool itself.</p>
<p>I have used a list of checkboxes and any or all of the elements selected in this checkbox list will be used for filtering the column values.</p>
<p>You can find the code in my fork of the official Grid.Mvc repo at below link:
<a href="https://github.com/HarvestingClouds/Grid.Mvc" target="_blank">Fork of Grid.Mvc repo with Advance Filters</a></p>
<p>I have also created a pull request for the same so that more people can benefit from this if they refer to the master branch of the main repo.</p>
<h3>What changes have I made?</h3>
<p>I have made changes to two files:</p>
<ol>
<li><strong>DefaultColumnFilter.cs</strong> file in the &quot;<strong>GridMvc</strong>&quot; class library project under the Filters folder. I have updated the GetFilterExpression method to create multiple expressions based on the pipe (|) character in filter values.</li>
<li><strong>gridmvc.customwidgets.js</strong> file in &quot;<strong>GridMvc.Site</strong>&quot; web application project</li>
</ol>
<p>Both of these paths are shown below:
Location of DefaultColumnFilter.cs:
<img src="https://HarvestingClouds.com/images/146173541957204ffb99678.png" alt="DefaultColumnFilter.cs" title="DefaultColumnFilter.cs" /></p>
<p>Location of gridmvc.customwidgets.js:
<img src="https://HarvestingClouds.com/images/146173543057205006529e6.png" alt="gridmvc.customwidgets.js" title="gridmvc.customwidgets.js" /></p>
<p>What the end result looks like:
<img src="https://HarvestingClouds.com/images/14617354255720500171fe7.png" alt="Checkbox Filtering" title="Checkbox Filtering" /></p>
<p>You can directly use the code if you want. Just honor the license of the original author.</p>
<p>Let me know in the comments below if you have any doubts and I will be happy to address them.</p>]]></description>
<link>https://HarvestingClouds.com/post/multiple-values-in-gridmvc-single-column-filter-via-checkboxes-with-code-sample</link>
<pubDate>Tue, 12 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Getting Started - Azure Site Recovery (ASR) In New Azure Portal</title>
<description><![CDATA[<p>Azure Site Recovery, or ASR, is now available in the new Azure Resource Manager (ARM) portal (codename Ibiza) with a modern user interface. It is in preview at this stage, but it is production ready for all the Hyper-V related scenarios.
<strong>Your older vaults (created via the Classic ASM Azure Portal) will not be available in the ASR preview feature.</strong></p>
<h3>What are the new features</h3>
<p>The new features include:</p>
<ul>
<li>All the goodness of Azure Resource Manager in ASR</li>
<li>Lean experience for various ASR scenarios</li>
<li>Enhancements to the specific Site Recovery scenarios</li>
</ul>
<h3>Let's take a quick look at some of these.</h3>
<p>If you browse and search for &quot;Recovery&quot;, you get Recovery Services Vaults as a preview feature.
<img src="https://HarvestingClouds.com/images/14617358075720517f78904.png" alt="Browse and Search" title="Browse and Search" /></p>
<p>Clicking on it will open up the blade for &quot;Recovery Services vaults&quot;. Notice that Microsoft has the PREVIEW text in this.
<img src="https://HarvestingClouds.com/images/1461735702572051161683f.png" alt="alt text" title="ASR Vault" /></p>
<p>Clicking on the Add button brings up the ASR vault creation blade. Notice the locations available for vault creation here.
<img src="https://HarvestingClouds.com/images/14617359385720520210878.png" alt="Vault Creation" title="Vault Creation" /></p>
<p>After you hit create, the vault gets deployed really quickly. I tested in the East US location and it was created in under 10 seconds.
Refresh to view your newly created vault. Click on it to open the NEW ASR vault features. Notice that the Backup feature is also in the ASR vault now.
<img src="https://HarvestingClouds.com/images/1461735907572051e31549a.png" alt="New Vault" title="New Vault" /></p>
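<p>If you prefer scripting the vault creation over clicking through the portal, the ARM cmdlets should cover it too. This is a sketch only, assuming you have the AzureRM.RecoveryServices module installed; the resource group and vault names are placeholders:</p>
<pre><code class="language-powershell"># Sign in and create a resource group to hold the vault
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'ASRDemoRG' -Location 'East US'

# Create the Recovery Services vault in the new ARM model
New-AzureRmRecoveryServicesVault -Name 'ASRDemoVault' `
    -ResourceGroupName 'ASRDemoRG' -Location 'East US'

# List the vaults to confirm creation
Get-AzureRmRecoveryServicesVault</code></pre>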
<p>To find the options for replication go to Settings -&gt; Getting Started section -&gt; Site Recovery -&gt; Follow Wizard.
<img src="https://HarvestingClouds.com/images/1461735869572051bd2d6fa.png" alt="New Site Recovery Wizard" title="New Site Recovery Wizard" /></p>
<p>Only two scenario types are available, but all the scenarios are covered here:</p>
<ul>
<li>From my site to Azure</li>
<li>From my site to another site</li>
</ul>
<p>Based on the scenario you select, you are asked for different options. The options for the Virtualization/Management Server type for &quot;From my site to Azure&quot; are:</p>
<ul>
<li>VMM</li>
<li>Standalone Hyper-V hosts</li>
<li>vCenter</li>
<li>Physical machines (not virtualized)
<img src="https://HarvestingClouds.com/images/14617358385720519e578ba.png" alt="Creation Options" title="Creation Options" /></li>
</ul>
<h3>Backup in ASR vault</h3>
<p>Another feature is creation of Backups from the same vault. Click on the + icon for Backup in the Vault main blade and then follow the wizard for the preview feature.
<img src="https://HarvestingClouds.com/images/14617357575720514d5d1d8.png" alt="Backup In ASR" title="Backup In ASR" />
Notice in the screenshot above that the backup types available are:</p>
<ul>
<li>Azure virtual machine backup</li>
<li>File Folder backup</li>
<li>System Center Data Protection Manager</li>
</ul>
<p>Selecting each option provides you with details for the next steps. You can then create a backup policy and configure items to back up.</p>
<p>Give these features a try and let us know in the comments below how you find them.
Happy Exploring!</p>]]></description>
<link>https://HarvestingClouds.com/post/getting-started-azure-site-recovery-asr-in-new-azure-portal</link>
<pubDate>Sat, 09 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Coming Soon - Windows 10 Anniversary Update</title>
<description><![CDATA[<p>The Windows 10 Anniversary Update is coming this summer. It will be available as a free download for the following devices (which is almost every device):</p>
<ul>
<li>PCs</li>
<li>Tablets</li>
<li>Phones</li>
<li>Xbox One</li>
<li>Microsoft HoloLens</li>
<li>IoT</li>
</ul>
<h2>What this means to you:</h2>
<ul>
<li>Improved Biometric Security</li>
<li>Microsoft Edge browser</li>
<li>Windows Ink (where just one click of the pen will bring up the whole gamut of tools available for use with your pen device)</li>
<li>Universal Windows Platform (UWP) apps are coming to Xbox through a unified Windows Store. Also, if you own an Xbox, you will be able to turn it into a dev box and do development on it</li>
<li>Various improvements to Cortana</li>
</ul>
<p><a href="https://www.microsoft.com/en-us/windows/upcoming-features" target="_blank">Check out more details here</a></p>]]></description>
<link>https://HarvestingClouds.com/post/coming-soon-windows-10-anniversary-update</link>
<pubDate>Fri, 08 Apr 2016 00:00:00 +0500</pubDate>
</item>
<item>
<title>Introducing Harvesting Clouds</title>
<description><![CDATA[<p>Harvesting Clouds is a blog about all things Cloud. Be it Private Cloud or Public Cloud, I will try to cover various aspects of both.</p>
<h3>Private Cloud</h3>
<p>My key areas of interest in Private Cloud include the following:</p>
<ul>
<li>PowerShell Scripting</li>
<li>Windows Azure Pack or WAP</li>
<li>Service Management Automation or SMA</li>
<li>Azure Stack</li>
<li>System Center Orchestrator</li>
<li>System Center VMM and other products like Service Manager, Ops Mgr, etc.</li>
</ul>
<h3>Public Cloud</h3>
<p>In addition to the Private Cloud, my areas of interest in the Public Cloud are:</p>
<ul>
<li>Microsoft Azure and Amazon Web Services - both IaaS and PaaS</li>
<li>Azure Automation</li>
<li>Desired State Configurations</li>
<li>Application Insights</li>
<li>Azure Web Apps</li>
<li>Web APIs</li>
<li>Azure Site Recovery and Backup</li>
<li>Migrations from Private to Public Clouds</li>
</ul>
<h3>Common Areas &amp; Best of both worlds</h3>
<p>I have also been involved in creating Hybrid clouds leveraging the best of both worlds. I will try to share my knowledge on this with you. The key aspects in this area are:</p>
<ul>
<li>Building Hybrid Solutions</li>
<li>Developing Web or Desktop Applications targeting either or both clouds (using MVC, .NET)</li>
<li>Using TFS Online, Visual Studio, GitHub to better collaborate and work in an automated fashion</li>
<li>Release Manager to automate your release workflows</li>
</ul>
<h3>Primary Focus</h3>
<p>As you must have guessed by now, the primary focus for this blog will be Microsoft Technologies. We will also explore beyond this and will be talking about various emerging open source technologies and the new Better Together world with the amalgamation of various technologies in one solution.</p>
<p>I invite you to take this journey with me!
Keep learning!</p>]]></description>
<link>https://HarvestingClouds.com/post/introducing-harvesting-clouds</link>
<pubDate>Fri, 01 Apr 2016 00:00:00 +0500</pubDate>
</item>
</channel>
</rss>
