Terraform Enterprise config with Azure DevOps

I have been spending quite a bit of time coming up to speed on Terraform. Terraform Enterprise is a SaaS application that helps teams use Terraform together.  It manages Terraform runs in a consistent and reliable environment, and includes easy access to shared state and secret data, access controls for approving changes to infrastructure, a private registry for sharing Terraform modules, detailed policy controls for governing the contents of Terraform configurations, and more.

When using Terraform Enterprise, you must create a .terraformrc (Linux/macOS) or terraform.rc (Windows) file which contains the credential token for “app.terraform.io”.  This is a token you create within the Terraform Enterprise setup.
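
Here is a minimal sketch of that file, assuming the stock “app.terraform.io” hostname (the token value is a placeholder for the user API token you generate in Terraform Enterprise):

```hcl
# terraform.rc / .terraformrc -- CLI credentials for Terraform Enterprise
credentials "app.terraform.io" {
  # Placeholder -- paste the user API token you generated
  token = "xxxxxxxxxxxxxxxx.atlasv1.zzzzzzzzzzzzz"
}
```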

[Image: terraformrc]

Inside of your Terraform configuration file, you need to add the terraform section to indicate you want execution and state to happen within Terraform Enterprise.
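
A minimal sketch of that section, assuming the “remote” backend with placeholder organization and workspace names:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "[your organization]"

    workspaces {
      # The workspace you created in Terraform Enterprise
      name = "[your workspace]"
    }
  }
}
```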

[Image: terraform-remote-config]

If you are using Azure DevOps pipelines to perform your Terraform tasks, you will need a way to have this file present when the pipelines kick off, but you don’t want to store something like this in source control.  There are a couple of different ways you can do this, but I prefer the following approach:

Note: I am assuming you have already configured a pipeline to package/push your Terraform configuration files to the artifacts location.

  1. Azure DevOps has a feature called Secure Files that allows you to upload files that are encrypted at rest.  They cannot be seen or modified once they are uploaded.  However, they can be made available to your pipelines.  Create your terraform.rc file as indicated above and upload it into the Azure DevOps Secure Files library.
    [Image: terraform-secure-files]
  2. Inside your pipeline, add the Download Secure File task from the Marketplace to your pipeline.  Be sure to set the “Reference Name” in the classic editor or the “name” property in the YAML version so that we can reference this file path later.
    [Image: terraform-download-secure-file]
  3. Add a PowerShell task to your pipeline that moves the secure file to the pipeline agent’s working location so that Terraform can find it, using the “Reference Name” variable we set above.
    [Image: terraform-powershell]
  4. Add the additional Terraform tasks (init, plan, apply) and Terraform will be able to leverage your Enterprise account.
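
For reference, steps 2 through 4 might look something like the following in a YAML pipeline.  The task, reference, and file names here are assumptions from my setup; adjust them to match yours:

```yaml
steps:
  # Step 2: download terraform.rc from the Secure Files library;
  # the "name" property lets us reference the downloaded path later
  - task: DownloadSecureFile@1
    name: terraformrc
    inputs:
      secureFile: 'terraform.rc'

  # Step 3: copy the secure file to where the Terraform CLI
  # looks for it on a Windows agent
  - powershell: Copy-Item -Path "$(terraformrc.secureFilePath)" -Destination "$env:APPDATA\terraform.rc"
    displayName: 'Stage terraform.rc for Terraform'

  # Step 4: the usual Terraform tasks (init/plan/apply) will now
  # pick up the Enterprise credentials
```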

Never stop learning and keep moving forward!

Terraform Enterprise Remote State with CLI

As I covered in my previous post, Terraform Enterprise is a SaaS application that helps teams use Terraform together, managing Terraform runs in a consistent and reliable environment with shared state and secret data, access controls, a private module registry, and policy controls.

Terraform CLI typically keeps track of state in a local file called terraform.tfstate.  Managing those state files across teams can be challenging if you have multiple team members working on the same configuration.  One benefit of using Terraform Enterprise is that you can manage state in a remote store.  There is a great tutorial on Terraform’s site for Getting Started with Terraform Enterprise.  One thing to keep in mind is that if you are going to simply use Terraform Enterprise to manage teams, store state and backups, and continue to execute your configuration files from the CLI, there is one important step you need to make sure you take.

After you set up your organization and create a workspace, be sure to go to the workspace Settings -> General Settings and set the Execution Mode to Local.  This will allow you to continue executing your “terraform plan …” and “terraform apply …” commands from the CLI.  Otherwise, the execution of the plan will happen in Terraform’s hosted environment, and you may run into issues or errors when executing your plans from the command line like the following:

Error: Error building AzureRM Client: Azure CLI Authorization Profile was not found. Please ensure the Azure CLI is installed and then log-in with `az login`.

The reason this error is thrown in this situation is that the Execution Mode is set to Remote, so the plan is trying to execute in Terraform’s hosted environment, which does not have the service principal environment variables set.

[Image: terraform settings local]

Angular 4 Reverse Proxy on Azure

When you are deploying a modern SPA (Single Page Application) in Azure where your APIs are using a different host than your SPA application, you run into an issue with CORS (cross-origin resource sharing).  Simply put, your browser, which is rendering your application from one host, cannot make an AJAX request to an API on a different host for security reasons.  When you are deploying to an IIS server on premises, a good solution is to use the URL Rewrite and Application Request Routing (ARR) features built into IIS.

The same kind of configuration can be achieved with a web site hosted in Azure Web Sites.  Any site hosted in Azure Web Sites has URL Rewrite and ARR enabled.  However, the proxy functionality is disabled by default in ARR.  To enable it, we will use an Azure Site Extension XDT transform which will modify the applicationHost.config file for our site and enable the proxy features.

Start by creating a file named applicationHost.xdt with the following XML and upload it to the root of your web site.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <system.webServer>
        <proxy xdt:Transform="InsertIfMissing" enabled="true" preserveHostHeader="false" reverseRewriteHostInResponseHeaders="false" />
    </system.webServer>
</configuration>

Next we will modify our web.config file with some routing rules:

<rewrite>
  <rules>
    <rule name="ProxyRule" stopProcessing="true">
      <match url="^api/(.*)" />
      <action type="Rewrite" url="https://[your api host]/{R:0}" />
    </rule>
  </rules>
</rewrite>

So now, within your SPA application, any request made to https://[your spa host]/api/… will be re-routed to the host configured in the rewrite rule.  This rule is the same rule you would use if you were deploying to IIS on premises with ARR.

If your site also uses ASP.NET MVC, there is one more step.  When MVC is enabled on the web site used as a proxy, the MVC router intercepts all of the requests, so they are not processed by ARR.  Instead of the request being forwarded to the destination server, you get an HTTP 404 – File Not Found error.

In order to fix this, you’ll need to exclude the proxy route from the MVC routes by adding the following code to your MVC application:

routes.IgnoreRoute("api/{*pathInfo}");

Publish to Private NuGet Repo from VSTS

NuGet is an outstanding dependency package management platform. The integration within Microsoft Visual Studio makes managing these packages extremely simple. One of the things I like the most about NuGet is that I don’t have to commit all of my dependencies into a source control repository. I have started using a private company-hosted NuGet repository sitting behind an SSL endpoint with Active Directory security so we can securely manage our own packages for internal and client projects. In the beginning we were also hosting our own Team Foundation Build servers on our own premises.  This worked really well and didn’t present any security issues because our NuGet server and build server were on the same domain.  However, recently we started moving to Microsoft Visual Studio Team Services (VSTS) in the cloud.  The problem arises because our NuGet server is still hosted internally on our domain, but VSTS and its build services are now in the cloud and don’t have direct access to our NuGet server.

Solution:
NOTE: Your NuGet server must be exposed externally outside your firewall.

Step 1: Inside of your build process within VSTS, you need to add a NuGet task to add a new NuGet source.  Set the “Display name” to something that represents adding a new NuGet source.  Set the “Command” to “custom”.  Add the following command in the “Command and arguments” text box:
sources add -Name "[source name]" -Source https://[NuGet Server Endpoint]/nuget -username "[domain]\[username]" -password [password]


Step 2: Inside of your build process within VSTS, you need to add a NuGet task to create the package you are going to upload to your repository.  Set the “Command” to “pack” and set the “Path to csproj or nuspec file(s) to pack” to the path within your repository where you have saved your .nuspec file.  If you aren’t familiar with creating a nuspec file, see Creating NuGet packages.


Step 3: Inside of your build process within VSTS, you need to add a NuGet task to push the package to your private repository.  Set the “Command” to “push”.  Select “External NuGet server (including other accounts/collections)” for the Target feed location.


Once you have the task created, you need to configure an external “Endpoint”.  Inside the task you just created, click on “Manage” next to NuGet server.  This will launch a new page that allows you to configure a new external site connection.  Click the “+ New Service Endpoint” button.  This will launch a dialog for you to enter the information for your private repository.  Add the same endpoint URL that you added to the source command in Step 1 and add the custom API key that is configured on your NuGet server.


Then back on your build step, select that new endpoint from the drop down.  That’s it!  You should now be able to package and publish to your private NuGet repository, assuming it’s exposed externally.

Windows 10 Mobile 10.0.10586.63 No Cellular Data (AT&T)

I like to think of myself as pretty open minded when it comes to technology. While I do spend 70% of my time on the Microsoft stack in the .NET realm, I do spend a lot of time in other areas as well. I have both a MacBook Pro and a Surface and I do quite a bit of mobile development on iOS with Obj-C and Swift, a little Android, and some Windows Mobile. I have owned all of the major phone platforms at one point or another but most recently went back to a Windows Phone. I know what you’re thinking: why would I do that? I really like the idea of a third player out there to round things out, and frankly I was getting a little bored with the iPhone after four years of pretty much the same interface and functionality. So I got me a Windows Phone and jumped right in with the Windows Insider Program.

I have actually really been enjoying the Windows 10 Mobile platform and I am a big fan of Windows 10 in general.  Things had been great until this weekend when 10.0.10586.63 got rolled out to the insiders.  My first thought was, “excellent, can’t wait to see all of the enhancements”.  The install went fine and things were looking good until I left my home for the first time, only to realize I no longer had a cellular data connection and couldn’t send MMS messages.  Every application said “Sorry, no network connectivity”.  I was enraged because yesterday, before the upgrade, things were close to perfect.  Well, after threatening to go back to my iPhone, calming down a little, and remembering I did sign up for a beta program on my main device, I did some research. I was able to get the cellular data working again.  The “Force” has been restored.  Here are the steps and settings I used to get my data back on.

Settings -> Network & Wireless -> Cellular & SIM

  • Make sure Cellular data is turned on
  • Click on “SIM Settings”
  • Add an Internet APN (making sure to save)
  • Add an MMS APN (making sure to save)

Hope this helps a few of you out and keeps you loving the Windows 10 Mobile OS.


403 Forbidden when calling the Azure Service Management REST API from a worker role instance

So a couple of weeks ago I blogged about how to make use of the Service Bus Entity Metrics REST APIs and some of the things I learned. The next step of the puzzle was to create a worker role that would monitor some of my service bus queues using this API and log the results.  A team member and I started down the path of creating this worker and thought things were going great.  We installed the certificate that we uploaded to the Azure portal on our local machines, ran the worker locally through the emulator, and all worked well.  Unfortunately, that’s where the celebration stopped.  Once we deployed the worker to a cloud service instance, we kept getting a 403 Forbidden exception when the worker tried to call the management API.  After days of fighting this, we finally had success.  Here are the steps we took to make this successful.

  • First things first, you need to create a self-signed certificate.  This is the really specific piece.  There are lots of articles on doing this, but they all seemed to lack one specific detail.  Credit for these steps clearly goes to Jeff and his post on Stack Overflow.  Run these two commands to create your certificate files.
    makecert -r -pe -n "CN=[name of certificate]" -sky exchange "[path to certificate].cer" -sv "[path to certificate].pvk"
    pvk2pfx -pvk "[path to certificate].pvk" -spc "[path to certificate].cer" -pfx "[path to certificate].pfx" -pi [password]
    
  • Upload the .cer file to the Azure Portal.  This is at the subscription level.
    [Image: azure portal certificate]
  • Upload the .pfx file to the cloud service instance hosting your worker role
    [Image: cloud service certificate]
  • Inside Visual Studio, add the certificate to the Worker Role properties
    [Image: worker properties]
  • We actually wrote a small custom web client to abstract some of the details.
    public class AzureManagementWebClient : WebClient
    {
        string _thumbprint = CloudConfig.Get("ServiceBusAPICertificateThumb");

        protected override WebRequest GetWebRequest(Uri address)
        {
            var request = base.GetWebRequest(address);

            // The certificate must be installed on your local machine and configured to deploy with the worker
            var certs = this.GetCertificate(StoreLocation.CurrentUser);
            if (certs.Count == 0)
            {
                certs = this.GetCertificate(StoreLocation.LocalMachine);
            }
            if (certs.Count == 0)
            {
                throw new ArgumentNullException(string.Format("Certificate: {0}", _thumbprint));
            }

            (request as HttpWebRequest).ClientCertificates.Add(certs[0]);
            (request as HttpWebRequest).Headers.Add("x-ms-version: 2013-10-01");
            (request as HttpWebRequest).Accept = "application/json";

            return request;
        }

        private X509Certificate2Collection GetCertificate(StoreLocation location)
        {
            var store = new X509Store(StoreName.My, location);
            store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
            var collection = store.Certificates.Find(X509FindType.FindByThumbprint, _thumbprint, false);
            store.Close();
            return collection;
        }
    }
  • In order to run this locally with the emulator, you will need to import the .pfx into your Local Machine / Personal certificate store.

Azure Service Bus Entity Metrics API

The other day I was presented with a project for a client wanting to add some monitoring and alerting to some of their Azure resources. Azure has some metrics and alerting functionality on most of their resources, such as CPU, DTU, Memory, etc. Click here to see information related to creating these alerts. One challenge I was faced with was monitoring metrics around service bus queues.  Unfortunately, Microsoft has not provided the same monitoring and alerting functionality for service bus queues through the Azure portal.  After some research, I uncovered a REST API for retrieving metrics for service bus queues.  Problem solved, right? Not!  I spent a couple of days trying to figure out this loosely documented API.  After many searches through the web, it appears everyone is having the same problem.  The API is there, but apparently no one can figure out how to use it.  Someone uncovered yet another exposed API which is completely different, and I was able to get it to work.  In this link, someone mentions they figured out they could use the following:

https://management.core.windows.net/[subscriberid]/services/monitoring/metricvalues/query?resourceId=/servicebus/namespaces/[namespace]/Queues/[queue]&names=size,incoming,outgoing,length,requests.total,requests.successful,requests.failed,requests.failed.internalservererror,requests.failed.serverbusy,requests.failed.other&timeGrain=PT5M&startTime=2015-11-16T08:59:27.7243529Z&endTime=2015-11-16T17:19:27.7243529Z

While this did work, I was not satisfied not being able to use the API that is documented above.  After more playing around, the magic sauce was finally discovered around how this API works. The following implementation worked successfully:

https://management.core.windows.net/[subscriberid]/services/servicebus/namespaces/[namespace]/queues/[queue]/metrics/incoming/rollups/PT1H/values?$filter=Timestamp%20gt%20datetime'2015-11-16T12:00:00.0000000Z'

There are a few very specific notes when using this API.

  • As mentioned in the Microsoft article, you will have to create a certificate, upload it to the Azure management portal, and then use this certificate to call the API.
  • You must send this header with the request:   x-ms-version: 2013-10-01
  • The API returns XML by default.  If you want to return JSON instead, send this header with the request:   Accept: application/json
  • If you omit the /values?$filter=Timestamp%20gt%20datetime'2015-11-16T12:00:00.0000000Z' from your request, you will simply get metrics back but all of the values will return null.
  • The format of the filter is very specific. The “Timestamp” property is case sensitive.  If you pass it lower case you will receive a HTTP 500 error.
  • The rollup options PT5M, PT1H, P1D, and P7D must be all upper case.
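
Putting those notes together, here is a sketch of what the request looks like from the command line.  The bracketed values are placeholders, and management.pem stands in for whatever file holds the certificate you uploaded to the portal:

```shell
# Placeholder values -- substitute your own subscription, namespace, and queue
SUBSCRIPTION_ID="[subscriberid]"
NAMESPACE="[namespace]"
QUEUE="[queue]"

# "Timestamp" is case sensitive, and the rollup interval (PT1H) must be upper case
FILTER="\$filter=Timestamp%20gt%20datetime'2015-11-16T12:00:00.0000000Z'"
URL="https://management.core.windows.net/$SUBSCRIPTION_ID/services/servicebus/namespaces/$NAMESPACE/queues/$QUEUE/metrics/incoming/rollups/PT1H/values?$FILTER"
echo "$URL"

# Call it with the management certificate and the required headers, e.g.:
#   curl --cert management.pem "$URL" \
#     -H "x-ms-version: 2013-10-01" \
#     -H "Accept: application/json"
```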

I created a custom WebClient that encapsulates the details of the headers and certificates.

class AzureManagementClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = (HttpWebRequest)base.GetWebRequest(address);

        X509Store store = new X509Store("My", StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2Collection certificates = store.Certificates.Find(X509FindType.FindBySubjectName, "[certificate name]", false);
        var cert = certificates[0];

        request.ClientCertificates.Add(cert);
        request.Headers.Add("x-ms-version: 2013-10-01");
        request.Accept = "application/json";
        
        return request;
    }
}


Kendo UI Progress Indicator with Large Grid Disappears

Recently I have been working on a rather large application for a re-insurance company and they often have extremely large complex grids of information they want to display and interact with.  There are times where we have to manipulate the data after it’s returned from the ajax call and before it’s rendered to the user.  The issue here is that the native progress indicator that displays when a grid is loaded is actually only displayed when the call is initiated but disappears after the data call is returned, not when the data is rendered.  Depending on the amount of data you are returning and how much manipulation you have to perform, there could be quite a delay between the time the progress indicator disappears and when the grid is actually rendered leaving the user to think something is wrong.  We are using the KendoUI MVC controls in Razor.  The solution I came up with was to hook into the RequestStart event on the datasource to turn the progress indicator on and the DataBound event on the grid to turn the indicator off.  This will now keep the progress indicator displayed until the DataBound is called which is essentially the same as when the grid is rendered.

Razor Code:

@(Html.Kendo().Grid()
   .Name("k-grid-payLogs")
   .DataSource(dataSource => dataSource
      .Ajax()
      .Aggregates(aggregates =>
      {
         aggregates.Add(p => p.AmountBilled).Sum();
         aggregates.Add(p => p.BillReviewFees).Sum();
         aggregates.Add(p => p.RepricedPaidAmount).Sum();
         aggregates.Add(p => p.TotalMccaPaymentRequest).Sum();
         aggregates.Add(p => p.ExpAdjustmentAmount).Sum();
         aggregates.Add(p => p.LossAdjustmentAmount).Sum();
         aggregates.Add(p => p.AdjustedPaymentRequestAmount).Sum();
      })
      .Read(read => read.Action("getDataJsonAjax", "reimbursementrequest")
      .Data("pageModule.getPayLogCriteria"))
      .PageSize(100)
      .Events(e => e.RequestStart("pageModule.onRequestStart"))
   )
   .Pageable(pageable => pageable.Refresh(true)
   .PageSizes(new[] { 10, 20, 50, 100 })
   .ButtonCount(5))
   .Sortable()
   .Filterable()
   .ColumnMenu()
   .Reorderable(reorder => reorder.Columns(true))
   .Resizable(resize => resize.Columns(true))
   .Events(e => e.DataBound("pageModule.onDataBound")))

Javascript:

this.onRequestStart = function (e) {
    kendo.ui.progress($("#k-grid-payLogs"), true);
};

this.onDataBound = function (e) {
    kendo.ui.progress($("#k-grid-payLogs"), false);
};


GiveCamp

In the spirit of giving back, one of my favorite activities throughout the year is volunteering in my community at GiveCamp.  As technologists, it’s often difficult for us to find ways to give back.  What better way than to donate some of your mad technical skills and time to an amazing charity, helping with projects like web sites, mobile applications, databases, etc.  These non-profit groups most likely don’t have funds to hire consultants and rely on folks like you and me to help them further their cause.  If you have not participated in one of these events before, I highly recommend it.
