Use Azure to Provide DRP for Your VMs

The Azure team has just launched Hyper-V Recovery Manager! 

Windows Azure Hyper-V Recovery Manager helps protect your on-premises applications and services by orchestrating the protection and recovery of virtual machines running in a System Center Virtual Machine Manager 2012 R2 or 2012 SP1 private cloud to a secondary location. With simplified configuration, automated protection, continuous health monitoring and orchestrated recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and recover applications accurately, consistently, and with minimal downtime.

Using System Center Virtual Machine Manager, you can replicate your on-premises VMs to a secondary site, with Azure orchestrating the protection and recovery – effectively a simple DR capability for your VMs.  In the event your primary site fails, you can recover your applications at the secondary site through the service.

image

 

Pricing is $16 per month for each Virtual Machine you want to protect.  In addition, if you want technical support, plans start at $29 per month.  The uptime guarantee is 99.9%.

One Key Challenge: The “Enterprise SLA”

Microsoft states that this service is backed by an “Enterprise SLA” – you can download the SLA here.

If you read the SLA, the interesting piece is that the ONLY remedy for an uptime failure is a service credit.  There is no compensation for loss of data, impact to your business, and so on.  The credit is defined as a percentage discount on that month's bill if Microsoft fails to live up to its uptime commitment:

Monthly Uptime Percentage     Service Credit*

<99.9%                        10%
<99%                          25%

So if your DRP solution is unavailable for a little more than seven hours in a month – pushing uptime below 99% – you receive a measly credit of 25% of the $16 fee, or $4 per VM. 

Read More

New Features for Azure including Staging Support for Web Sites!

The Microsoft team has just released an update to Azure and it includes the following new features:

  • Web Sites: Staged Publishing Support and Always On Support
  • Monitoring Improvements: Web Sites + SQL Database Alerts
  • Hyper-V Recovery Manager: General Availability Release
  • Mobile Services: Support for Sencha Touch
  • PCI Compliance: Windows Azure Now Validated for PCI DSS Compliance

One of the more interesting features is the staging support for web sites.  Using this feature, you can take a web site running on Azure and clone it into a staging environment. 

Using staged publishing, you can create a new staging version of your site, make changes to it, and then swap the staging version into production.  The original production site is swapped back into staging to allow for a rollback if your deployment doesn't work out as expected.

One important note: this feature is only available in Standard mode, not Shared or Free mode. 

Read More

Comparison of a Commercial Tool with My Custom Azure Table Storage Importer

After my blog post on creating a custom Azure Table Storage importer, I received a note from Cerebrata, a company that sells a commercial Azure management tool called Azure Management Studio.  They recommended I try it out, so I downloaded the 30-day trial, and here is my experience performing essentially the same task of importing a CSV file into table storage.

Registering the Storage Account

Adding a storage account is easy – you just provide your Storage Account Name and Key.

image

Creating a Table from CSV File

In my previous blog post, I loaded a large CSV file into Windows Azure Table Storage, so I tried the same task using Azure Management Studio.  Using the tool, you can create a table from an existing CSV file.

image

You can then choose the Partition Key from your existing fields and your Row Key from either your existing fields or a generated unique identifier:

image

Performance

Running the tool from my laptop at home to Windows Azure (i.e. over a high-latency connection), performance was very slow at about 1,600 records per minute.  With almost 500K records to import, this would take more than five hours!

So I used the same technique as I did with my custom tool – I installed it on my demo virtual machine running in Azure to reduce the latency.  Using my custom tool, 500K records took about 5 minutes, and other developers have demonstrated that scaling this approach up with multiple instances of a custom importer can reach 22,500 row inserts per second. 

Running the commercial tool from my VM made a big difference.  On average, Azure Management Studio processed just under 7,000 records per minute.  

This is still significantly slower than my custom tool: each of my console apps, even in a debug build, imported at a rate of approximately 20,000 records per minute, and I could run five instances at once.  That works out to 100,000 records per minute – an almost 14x improvement in performance. 

image

This is what the VM looked like while the application was running at full tilt.  The Azure Management Studio app was only using 1-2% of the available CPU power and barely taxing the network.

image

image

image

Read More

Using Power Query and PowerView to Understand Your Email Patterns

One of the data sources that you can connect to with the latest Power Query Add-On for Excel is Exchange. 

image

When you select this data source, you use your Exchange credentials to connect, and this creates a query that gathers data from your mail, calendar, people, tasks and meeting requests.

image

If you click on any of these tables, you can pull your inbox into Excel and visualize it using PowerView.  Here are a few examples.

As with any Power Query query, you can direct your query to either an Excel worksheet or a PowerPivot model.  I always use a PowerPivot model because it can handle a much larger number of rows (a worksheet maxes out at just over 1 million) and because we can then incorporate other types of data into the model.

image

When Power Query displays the preview, you will notice that columns such as Sender, ToRecipients and CCRecipients show values like "Record" and "Table".  This is because the record for each email is an entity with multiple tables and embedded records in it.  Before you run your query, retrieve the sub-records within each email by clicking the expand symbol on the right of each column you want to expand.

image

image

Once we have the raw data, we need to create a time dimension table so we can group dates by weeks, months, years, etc.  One of the key columns is DateTimeReceived, which provides the date the email arrived in your inbox. 

Using a Date Table, we can aggregate dates by month and visualize using PowerView!

image

Read More

New Feature Coming to Office 365: Encrypted Emails

In the first quarter of 2014, Office 365 will provide the ability to send encrypted emails using the new version of Exchange Hosted Encryption. 

The service allows administrators to configure rights management services that prescribe how emails are managed by Office 365 users. Rules include encrypting emails so that they can only be decrypted by the recipient of the email. 

The email recipient doesn’t need to be an Office 365 user – it can be anyone with an email account.

The encryption approach uses multiple methods to encrypt the tunnel, the connection, and the content of the email.

Read More

Fixing the SharePoint Napa Tutorial to Read Data From Your Own Site

Office 365 provides a set of online coding tools called "Napa" – a web-based coding environment for building custom Office apps, including SharePoint Apps.  These apps can be deployed to a testing site and then published to your app catalogue for use by your end users.

The basic Napa tutorial provides an example of how to read, create and delete SharePoint lists. 

If you code the basic JavaScript provided and run the app, you will see the following results:

image

The list of lists comes from the context of the app itself, not the SharePoint site that launched the app.  This is why, if you create a new list through the app, you won't easily find it in your site. 

This is clearly not what we would want in the real world – we would want our app to interrogate the SharePoint site in which our app resides, not some artificial application context!  How do we get this to work?

After researching a few message boards, blogs and tutorials, I found a few bits of code that will help.  The following code is the same basic Napa application but with the context set to the host SharePoint site.  Below is an explanation of what is going on and how to create and publish the app.

// 'use strict' is commented out because it will not work with IE10
// 'use strict';

// define the variables
var hostUrl;
var context;
var hostcontext;
var web;
var user;
var message = "";
var lists;

 //make sure we get SharePoint Ready after the browser DOM is ready
 $(document).ready(function () {
     SP.SOD.executeFunc('sp.js', 'SP.ClientContext', sharePointReady);
     $("#getListCount").click(function (event) {
        getWebProperties();
        event.preventDefault();
    });

    $("#createlistbutton").click(function (event) {
        createlist();
        event.preventDefault();
    });

    $("#deletelistbutton").click(function (event) {
        deletelist();
        event.preventDefault();
    });

 });

// This function creates a context object which is needed to use the SharePoint object model
 function sharePointReady() {
     var hostUrl = decodeURIComponent(getQueryStringParameter("SPHostUrl"));
     context = new SP.ClientContext.get_current();
     hostcontext = new SP.AppContextSite(context, hostUrl);
     web = hostcontext.get_web();
     loadWebTitle();
     loadUserName();
     displayLists();
 }

function onGetWebSuccess() {
    message = message + "The title of the host web of this app is " + web.get_title() + ". ";
    updateMessage();
}
 
function loadWebTitle()
{
    context.load(web, "Title");
    context.executeQueryAsync(onGetWebSuccess, onGetWebFail);
}
 
function updateMessage()
{
    $('#message').text(message);
}
 
 // This function prepares, loads, and then executes a SharePoint query to get the current user's information
function loadUserName() {
     user = web.get_currentUser();
     context.load(user);
     context.executeQueryAsync(onGetUserNameSuccess, onGetUserNameFail);
}

// This function is executed if the above call is successful
// It appends the user name to the message and updates the display
function onGetUserNameSuccess() {
    message = message + "Hello " + user.get_title() + ". ";
    updateMessage();
}

 // This function is executed if the above call fails
function onGetUserNameFail(sender, args) {
    alert('Failed to get user name. Error:' + args.get_message());
}

function onGetWebFail(sender, args) {
    alert('Failed to get the web title. Error:' + args.get_message());
}

function getQueryStringParameter(param) {
     var params = document.URL.split("?")[1].split("&");
     //var strParams = "";     
     for (var i = 0; i < params.length; i = i + 1) {
         var singleParam = params[i].split("=");
         if (singleParam[0] == param) {
             return singleParam[1];
         }
     }
}
    
function getWebProperties() {
    // Get the number of lists in the current web.
    context.load(lists);
    context.executeQueryAsync(onWebPropsSuccess, onWebPropsFail);
}

function onWebPropsSuccess(sender, args) {
    alert('Number of lists in web: ' + lists.get_count());
}

function onWebPropsFail(sender, args) {
    alert('Failed to get list. Error: ' + args.get_message());
}

function displayLists() {
    // Get the available SharePoint lists, and then set them into 
    // the context.
    lists = web.get_lists();
    context.load(lists);
    context.executeQueryAsync(onGetListsSuccess, onGetListsFail);
}

function onGetListsSuccess(sender, args) {
    // Success getting the lists. Set references to the list 
    // elements and the list of available lists.
    var listEnumerator = lists.getEnumerator();
    var selectListBox = document.getElementById("selectlistbox");
    if (selectListBox.hasChildNodes()) {
        while (selectListBox.childNodes.length >= 1) {
            selectListBox.removeChild(selectListBox.firstChild);
        }
    }
    // Traverse the elements of the collection, and load the name of    
    // each list into the dropdown list box.
    while (listEnumerator.moveNext()) {
        var selectOption = document.createElement("option");
        selectOption.value = listEnumerator.get_current().get_title();
        selectOption.innerHTML = listEnumerator.get_current().get_title();
        selectListBox.appendChild(selectOption);
    }
}

function onGetListsFail(sender, args) {
    // Lists couldn’t be loaded - display error.
    alert('Failed to get list. Error: ' + args.get_message());
}

function createlist() {
    // Create a generic SharePoint list with the name that the user specifies.
    var listCreationInfo = new SP.ListCreationInformation();
    var listTitle = document.getElementById("createlistbox").value;
    listCreationInfo.set_title(listTitle);
    listCreationInfo.set_templateType(SP.ListTemplateType.genericList);
    lists = web.get_lists();
    var newList = lists.add(listCreationInfo);
    context.load(newList);
    context.executeQueryAsync(onListCreationSuccess, onListCreationFail);
}

function onListCreationSuccess() {
    displayLists();
}

function onListCreationFail(sender, args) {
    alert('Failed to create the list. ' + args.get_message());
}

function deletelist() {
    // Delete the list that the user specifies.
    var selectListBox = document.getElementById("selectlistbox");
    var selectedListTitle = selectListBox.value;
    var selectedList = web.get_lists().getByTitle(selectedListTitle);
    selectedList.deleteObject();
    context.executeQueryAsync(onDeleteListSuccess, onDeleteListFail);
}

function onDeleteListSuccess() {
    displayLists();
}

function onDeleteListFail(sender, args) {
    alert('Failed to delete the list. ' + args.get_message());
}

 

Setting the Context to the Host App

The key difference in approach in this version of the app is the setting of the context to be the host URL.  The host URL is stored in the query string passed into the application and is decoded using the function decodeURIComponent.

var hostUrl = decodeURIComponent(getQueryStringParameter("SPHostUrl"));

We then reset the context to come from the hostUrl instead of the original application site context:

hostcontext = new SP.AppContextSite(context, hostUrl);
web = hostcontext.get_web();

Testing the Basic Context

We created two basic methods to test that we have the context set appropriately.  loadWebTitle() grabs the title of the web and loadUserName() grabs the current user name. 

The Rest of the Original Napa App

The original Napa tutorial application displays all the SharePoint lists in the current web context.  This now works but reflects our new context based on the host SharePoint site.

Setting the Right Permissions

Now that we are reading list information from the host site, we need to request the appropriate permissions from it.  Under the app's settings there is a Permissions tab that allows you to specify the permissions to request when deploying your app.

image

In order to read the lists, the Web permission needs to be set to “Read”.  In order to delete a list, the Web permission needs to be set to “Manage”. 

When you deploy your app to your site, you will be asked whether you trust your app and will be shown the permissions being requested.

image

The Result

Here is the result based on my development site.  As you can see, it now shows all the lists in my site. 

image

In addition, if I create a new list it now shows up as expected in the host site.

image

image

Read More

Understanding the Azure IAAS Security Architecture

Microsoft has published a new whitepaper explaining the security architecture of Windows Azure.  You can download it here.

The whitepaper highlights the key security concepts and features of Windows Azure Infrastructure as a Service (IaaS).

image

Virtual Security, not Physical Security

One of the key security safeguards in the traditional infrastructure world is physical isolation of networks and servers.  In the cloud world, this no longer exists because all the servers are virtual. 

This applies across customers as well – you have no guarantee that your virtual machines are not running on the same physical host as another customer's. 

This is the key challenge in adapting traditional security policies, attitudes and approaches, because most of them were written years ago and prescribe physical network isolation and/or physical environment isolation. 

As we move to cloud based infrastructures, we will need to find other ways to ensure isolation of environments beyond physical boxes.

How does Windows Azure protect your environment from other customers?  The answer is twofold: 1) network traffic between VMs is highly secured and managed by the Windows Azure service; 2) the Windows Azure service itself is secured and protected from customer VMs through "multiple layers" of security.  

Virtual Network Isolation Within a Customer

Azure supports two concepts of virtual network isolation.  A “Deployment” is an isolated environment that allows VMs within this deployment to talk to each other through private IP addresses.  A “Virtual Network” allows for communication between deployments through specified IP channels and is isolated from other virtual networks.

Isolating Traffic Coming from the Internet

By default, every VM blocks traffic from the Internet except on the remote management ports.  When you create a VM, you can add additional endpoints as appropriate – for example, an FTP endpoint that accepts traffic on port 21 for sending and receiving files.

In addition to specifying the endpoints, administrators can further restrict them through additional rules such as IP access control lists or by only allowing traffic from a site-to-site VPN.

The key statement here is this one:

If an application exposes input endpoints, it should follow the same security model as if it were running open on the Internet.

Be Careful about Traffic Crossing Regions

Communication across regions (e.g. servers in North America shuttling data to servers in Europe) is deemed less secure than communication within a region:

If an application sends or receives any sensitive data across Windows Azure regions, then communications must be encrypted. Cross-region traffic transits over a WAN and is more open to interception.

Administrators should pay attention to this specific scenario because many will be looking to use multiple regions to increase availability and provide disaster recovery.

Integrating Azure Networks and Internal Corporate Networks

The recommended approach to integrating Azure networks and internal corporate networks is to use the provided Windows Azure Virtual Network Gateway:

image

The gateway establishes an IPsec tunnel (i.e. encrypted and controlled) between the Azure environment and your corporate network's VPN device.

Read More

Creating a Custom Windows Azure Table Storage Importer

As mentioned in my previous post on Windows Azure Table Storage, I put together a basic importer for moving records stored in CSV files into Windows Azure Table Storage.

I have uploaded the code to GitHub here.  Feel free to borrow and repurpose the code as a learning tool.  This is not production-class code, but it is sufficient for an initial proof of concept.

I drew on a few existing sources of information to build my tool.  This article walks through some key ideas on how the utility works and how I architected and developed the solution.

Basic Requirements

The requirements for my import utility are as follows:

  • Read a CSV file as a list of records
  • Identify a partition key and row identifier key for each record
  • Insert each entity into Windows Azure Table Storage

Setting Up the Environment

I used Visual Studio 2012 and the latest Windows Azure SDK, with Visual Studio Online managing the source code.  For testing, I initially used the Windows Azure Storage Emulator locally and then tested in the cloud against a live Windows Azure storage account.

Basics of the Architecture

I created two projects: 1) a console application called “WindowsAzureTableStorageImporter” used to read in CSV files and manage the overall process and 2) a class library called “WindowsAzureTableStorage” that contains the key classes for managing transactions with Windows Azure Table Storage.

Console App: Configuration Parameters

In designing the console application, I added a few configuration parameters that are used to configure the import process:

  • StorageConnectionString: connection string for the Windows Azure storage account
  • FileName: path to the CSV file
  • PartitionKeyField: name of one of the fields in the CSV to use as the partition key
  • RowKeyField: name of one of the fields to use as the row key – if this is not supplied, the importer will use Guid.NewGuid() to generate a new ID
  • AzureTableName: name of the table to create and add the entities to
  • MaximumRowsToImport: maximum number of rows to load into Azure – if this is left out or less than 0, the program will load all rows available
  • StartingOffset: starting row to begin adding from the CSV file
  • MaximumTasks: maximum number of async requests to create before waiting for them to finish

The console application is responsible for reading in these configuration parameters and passing them to the WindowsAzureTableStorageService class as appropriate.
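Here is a minimal sketch of how those settings might be read from appSettings in App.config.  The key names mirror the list above, but the exact parsing code in my GitHub repository may differ.

// Illustrative only: read the importer settings from App.config appSettings.
// Requires a reference to System.Configuration (using System.Configuration;).
string storageConnectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
string fileName = ConfigurationManager.AppSettings["FileName"];
string partitionKeyField = ConfigurationManager.AppSettings["PartitionKeyField"];
string rowKeyField = ConfigurationManager.AppSettings["RowKeyField"]; // null/empty means generate a Guid
string azureTableName = ConfigurationManager.AppSettings["AzureTableName"];

// Numeric settings fall back to "import everything from the top" defaults when absent.
int maximumRowsToImport;
if (!int.TryParse(ConfigurationManager.AppSettings["MaximumRowsToImport"], out maximumRowsToImport))
    maximumRowsToImport = -1;

int startingOffset;
if (!int.TryParse(ConfigurationManager.AppSettings["StartingOffset"], out startingOffset))
    startingOffset = 0;

int maximumTasks;
if (!int.TryParse(ConfigurationManager.AppSettings["MaximumTasks"], out maximumTasks))
    maximumTasks = 200; // 200 in-flight calls worked well in my tests (see below)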

Console App: Reading in CSV File

I borrowed a publicly available package for reading CSV files.  This package works really well – it's fast and it's configurable.  In my version, I assume a very basic CSV file with headers in the first row and a comma as the field delimiter.  The CSVHelper package supports configuration options to change these assumptions and to map incoming fields to in-memory entities. 

Reading in the file is pretty simple – we locate the CSV file and read through the file row by row.  For each row, we identify the partition key, the row key and any additional fields and add them to a DictionaryTableEntity.

// open the file and start reading
StreamReader textReader = File.OpenText(fileName);
CsvReader reader = new CsvReader(textReader);

while (reader.Read())
{
    if (reader.Row-1 < startingOffset)
    {
        // Row - 1 accounts for the header row
        // do nothing – skip rows until we reach the startingOffset
    }
    else
    {
        // populate an entity for each row
        DictionaryTableEntity entity = new DictionaryTableEntity();

        if (rowKeyField == null)
        {
            entity.RowKey = Guid.NewGuid().ToString();
        }

        foreach (string field in reader.FieldHeaders)
        {
            if (field == rowKeyField)
            {
                entity.RowKey = reader[field];
            }
            else if (field == partitionKeyField)
            {
                entity.PartitionKey = WindowsAzureTableStorageService.createValidPartitionKey(reader[field]);
            }
            else
            {
                string value = reader[field];
                entity.Add(field, value);
            }
        }
        if (entity.PartitionKey == null)
            throw new Exception("Bad data record. Partition key not found.");

        entities.Add(entity);
        if (maximumRowsToImport > 0 && entities.Count == maximumRowsToImport)
            break;
    }
}

I added the concept of a startingOffset and a maximumRowsToImport to limit the rows stored.  This is helpful for testing: we can take a file that has 500,000 rows but only test with rows 100-200.  As we will see, it is also helpful for running multiple instances of the importer at once, each importing a different range of the file.

DictionaryTableEntity

In order to load our rows, we need an object to store each row.  Windows Azure Table Storage provides an interface called ITableEntity that can be used to create a custom class that represents the entity to be added to the table. 

In this case, the key requirement is to support a dynamic list of properties, since we don't know what fields are in the CSV file.  After reading through a few suggestions, I created a DictionaryTableEntity class which implements the ITableEntity interface as well as the IDictionary interface, providing dictionary-like functionality where you can add any field as a name-value combination like so:

entity.Add(field, value);

In this case, the field is determined based on the CSV file layout and the value is the corresponding row value for each record.

We can store these entities in a simple list and we pass the list of entities to the WindowsAzureTableStorageService to be processed as a series of batches.
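For reference, here is a minimal sketch of the idea, assuming the older Microsoft.WindowsAzure.Storage table library.  It is illustrative rather than the exact class in my repository: it implements only ITableEntity plus the Add helper, and omits the IDictionary plumbing for brevity.

using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Illustrative sketch only: a table entity backed by a dictionary of properties.
// The real class also implements IDictionary<string, EntityProperty>.
public class DictionaryTableEntity : ITableEntity
{
    private IDictionary<string, EntityProperty> properties =
        new Dictionary<string, EntityProperty>();

    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTimeOffset Timestamp { get; set; }
    public string ETag { get; set; }

    // Add a CSV field as a name-value property.
    public void Add(string field, string value)
    {
        properties[field] = EntityProperty.GeneratePropertyForString(value);
    }

    // ITableEntity: copy deserialized properties into the dictionary.
    public void ReadEntity(IDictionary<string, EntityProperty> props, OperationContext operationContext)
    {
        properties = new Dictionary<string, EntityProperty>(props);
    }

    // ITableEntity: hand the dictionary to the storage client for serialization.
    public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        return properties;
    }
}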

WindowsAzureTableStorageService: Creating a Table

Creating a table is easy – you connect to the storage account and create the table.

// Retrieve the storage account from the connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);

// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

// Create the table if it doesn’t exist.
CloudTable table = tableClient.GetTableReference(TableName);
table.CreateIfNotExists();

In this case we create the table if it doesn’t already exist.

Important Note on Deleting Tables: if you delete a table, either through Azure Storage Explorer or through code, the action is performed asynchronously.  According to Microsoft, it can take up to 40 seconds for the actual delete to happen, and if you try to create the table while it is still being deleted, you will receive a StorageException with a "409" error.
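One way to deal with this, continuing from the snippet above, is to retry the creation until the delete completes.  This is an illustrative sketch rather than code from my repository:

// Illustrative sketch: retry CreateIfNotExists while a prior delete is still completing.
CloudTable table = tableClient.GetTableReference(TableName);
bool created = false;
while (!created)
{
    try
    {
        table.CreateIfNotExists();
        created = true;
    }
    catch (StorageException ex)
    {
        // 409 Conflict: the table is still being deleted on the server side.
        if (ex.RequestInformation != null && ex.RequestInformation.HttpStatusCode == 409)
        {
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // wait and try again
        }
        else
        {
            throw;
        }
    }
}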

WindowsAzureTableStorageService: Adding Batches

The Table Storage API allows entities to be added one by one or in batches.  Given that we have hundreds of thousands of records, using batches is going to be much faster because we can add up to one hundred records in a single call.  Each call is made over HTTP and is therefore relatively slow, so the fewer calls we make, the faster our import will be.  In addition, transaction costs are calculated per call, so adding records in a batch counts as one transaction from a billing perspective.

The key rules for adding a batch in Windows Table Azure Storage are:

  • A maximum of 100 entities can be added in a batch
  • All the entities in a batch must have the same partition key

So the basic logic of the AddBatch method is as follows (sketched in code after the list):

  • Sort the list of entities by PartitionKey so that we can group by PartitionKey into batches.
  • When we hit 100 entities, commit the batch and start a new one.
  • When we hit a new PartitionKey, commit the batch and start a new one.
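
Here is a minimal sketch of that grouping logic, assuming the entities have already been read into a list.  It is simplified compared to the actual AddBatch implementation but follows the same rules:

// Illustrative sketch of grouping entities into valid batches.
// Requires using System.Collections.Generic, System.Linq and Microsoft.WindowsAzure.Storage.Table.
private List<TableBatchOperation> BuildBatches(List<DictionaryTableEntity> entities)
{
    // All entities in a batch must share a partition key, so sort by it first.
    List<DictionaryTableEntity> sorted = entities.OrderBy(e => e.PartitionKey).ToList();

    List<TableBatchOperation> batches = new List<TableBatchOperation>();
    TableBatchOperation current = new TableBatchOperation();
    string currentPartition = null;

    foreach (DictionaryTableEntity entity in sorted)
    {
        // Start a new batch at 100 entities or when the partition key changes.
        if (current.Count == 100 ||
            (currentPartition != null && currentPartition != entity.PartitionKey))
        {
            batches.Add(current);
            current = new TableBatchOperation();
        }

        current.Insert(entity);
        currentPartition = entity.PartitionKey;
    }

    if (current.Count > 0)
    {
        batches.Add(current);
    }

    return batches;
}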

WindowsAzureTableStorage: Speeding Up By Using ASYNC

In the original AddBatch method, I used a synchronous call for each ExecuteBatch call.  However, this is quite slow especially with a high latency connection to the Azure data center.

We can speed this up by using an asynchronous call to the Windows Azure API.  The idea is that instead of waiting for the HTTP call to finish processing, we’ll start on the next batch and make another HTTP call.  Instead of having only one connection at a time, we can have hundreds of connections open all processing batches in progress.

The Windows Azure API supports the new .NET async pattern through the ExecuteBatchAsync method.  The strategy employed in my code takes all the batches that have been organized and, for each one, creates an async task that returns immediately but continues processing in the background. 

.NET also provides the ability to wait for all these asynchronous tasks to complete, and we bubble this waiting up to the console app so that it doesn't shut down before all the tasks are done.  This works through the .NET method Task.WhenAll, which takes all of the tasks you have created and returns a single task that completes when all of them have completed. 

foreach (TableBatchOperation batch in batchOperations)
{
    batchCount++;
    taskCount++;
    Debug.WriteLine("Adding batch " + batchCount + " of " + batchOperations.Count);
    Task<IList<TableResult>> task = table.ExecuteBatchAsync(batch);
    batchTasks.Add(task);
    if (MaximumTaskCount > 0 && taskCount >= MaximumTaskCount)
    {
        Debug.WriteLine("Maximum task threshold reached - waiting for existing tasks to finish.");
        await Task.WhenAll(batchTasks);
        taskCount = 0;
    }
}
await Task.WhenAll(batchTasks);

One other feature I added was the ability to cap the number of asynchronous operations in flight at a time.  If we have 500,000 records to import, we could have thousands of batches, and in my testing – especially over a high-latency connection – creating thousands of requests would overload the network and time out some of the connections.  Instead, a configurable parameter throttles the number of asynchronous calls in flight to a maximum, and when the maximum is reached the importer waits for the outstanding calls to finish before starting the next set.  In my testing, 200 asynchronous calls in flight at once worked quite well and was reasonably fast; 500 tended to create random timeouts from my home network connecting to Azure. 

In our console application, we get this task returned and we simply wait for it to complete:

var task = tableStorageService.AddBatchAsync("test", entities, maximumTasks);
task.Wait();

Once all the batch insert tasks have completed, we're done processing.

Speeding Up Using Windows Azure VMs and Multiple Console Instances

There are a couple ways to speed up the import process further:

  • Remove the latency through using a Windows Azure virtual machine to run the importer close to the Table Storage network.
  • Run multiple instances of the importer to force usage of more available cores.

I created an extra-large VM, which has 8 cores and 14 GB of RAM, and deployed my code to it.  I then created 5 instances of my console app and set each one to process the same CSV file from a different starting offset – the first imports records 0-100,000, the second imports 100,001-200,000, and so on. 

image

We can now run five instances of the application, each with 200 asynchronous calls plowing data into the same table.

Here is what the applications look like in the Task Manager – as you can see the CPU hits about 10-15% and memory hits about 15% running all 5 instances at once.  We could in theory run 20-30 instances and gain even faster performance.

image

Loading 500,000 records takes about 5 minutes.  This is pretty good, but slow in comparison to the results reported by Troy Hunt, who was able to demonstrate 22,500 rows being inserted per second!

Read More