Top 10 2015 Predictions for Microsoft Azure

Like Office 365, Microsoft Azure has undergone a lot of change in the past year.  Throughout 2014, it seemed like a new service was being released in either preview or production every few weeks.  Looking forward, here is my list of predictions for Azure in 2015.

10. Continued Focus on Security and Encryption

One of Microsoft’s key differentiators as an enterprise cloud provider is its focus on security.  Microsoft already has many of the key certifications and has made public commitments to enterprise grade security, and in 2015 expect those investments to continue. 

9. Microsoft Shrinks the Market Share Gap with Amazon

Amazon is still the dominant cloud supplier and will continue to be in 2015.  However, in 2014, Microsoft’s growth rate outpaced Amazon’s, and I think we’ll see the same in 2015 as Microsoft closes the gap in market share. 


Also watch for data center expansion in 2015 as a way to take market share from Amazon – Microsoft has now launched in China and Australia, for example, and this geographic expansion will continue into 2015.

8. New Versions of Visual Studio, ASP.NET, .NET Framework

While not technically an Azure feature, the new releases of Visual Studio, ASP.NET and the .NET Framework will further integrate with, support and provide new ways of building solutions for Microsoft Azure.  The new Azure SDK provides better diagnostics, improved deployment tools for Azure, support for Blob storage, and improved HDInsight support, and the new version of Visual Studio supports Azure connected services, enterprise SSO, code analysis for Azure, and integrated publishing to Azure.


7. Services in Preview Launch as Production Services

There were many new services launched in 2014 for Microsoft Azure in preview including:

  • Improvements to Azure SQL
  • Better storage and shared file systems
  • A number of big data services (Batch, Machine Learning, Data Factory, Storm, Stream Analytics, etc)
  • Live media streaming
  • NoSQL document based database
  • Azure Search
  • Site recovery through replication

In particular, the big data services are a major strategic investment for Microsoft and a key differentiator in the fight with other cloud service providers for market share.  Expect these to be promoted to enterprise class, production ready services, with a LOT of marketing and promotion around them in 2015.

6. Microsoft Struggles with the Cannibalization of SQL Server

Microsoft’s number one product is SQL Server, with a massive $6 billion in revenue.  The SQL Server business grew by 11% in 2014, but Azure growth was more than 100%. 

Microsoft has a fundamental problem with SQL Server as it pivots towards the cloud:

  • The cost of SQL Server running in the cloud through IAAS is quite expensive, especially compared to NOSQL alternatives.  I can run a basic Windows VM for as little as $14 / month, but installing SQL Standard drives that price up to $315 / month, or $1,777 / month for SQL Enterprise.  A real enterprise class SQL cluster running through Azure on IAAS would cost thousands of dollars per month once you account for high availability and clustering requirements.
  • Microsoft has Azure SQL, which is a PAAS based offering effectively competing with traditional SQL Server running either on premise or in the cloud.  Azure SQL is significantly cheaper than running a full SQL Server license especially for smaller databases. 
  • Microsoft has multiple NOSQL alternatives including Hadoop, Table Storage, and Document DB.  Each of these services can replace a traditional SQL database in certain scenarios.
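
To put those license premiums in perspective, here is a quick back-of-the-envelope calculation using the list prices quoted above.  These are illustrative only – actual Azure pricing varies by region and changes frequently.

```python
# Back-of-the-envelope comparison using the approximate list prices quoted
# above; actual Azure pricing varies by region and changes frequently.
basic_vm = 14          # basic Windows VM, $ / month
sql_standard = 315     # same VM with SQL Server Standard, $ / month
sql_enterprise = 1777  # same VM with SQL Server Enterprise, $ / month

def annual(monthly):
    """Annualize a monthly price."""
    return monthly * 12

# The yearly premium of the SQL Server license over the bare VM:
print(annual(sql_standard - basic_vm))    # 3612
print(annual(sql_enterprise - basic_vm))  # 21156
```

A two-node HA cluster roughly doubles those figures before you even count storage and networking, which is why NOSQL and PAAS alternatives start to look attractive.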

In the same way that Office 365 has cannibalized on premise implementations of SharePoint, Exchange and Office, Azure will start to cannibalize all those SQL Server databases running on premise.  It will happen slowly because of the nature of migrating any kind of database but over the next 3-5 years expect the shift to become more visible. 

Microsoft’s struggle in 2015 will be how to position its traditional SQL Server business (in particular all of those customers being sold on upgrades to SQL 2014) vs. customers who might start to look at moving to NOSQL alternatives as they move to the cloud.

5. Microsoft Continues to Pivot on Open Source, Linux and Partnerships

Microsoft will continue its pivot toward embracing Linux, open source, and other non-Microsoft partnerships such as SalesForce, IBM, Oracle, Dropbox, etc.  Microsoft has been busy in 2014 open sourcing a number of its core platforms, including ASP.NET and big chunks of the .NET Framework, and has moved to a cross-platform model for .NET.  Microsoft has also announced partnerships with SalesForce and Dropbox to support integration with Office 365 and Azure.

Expect this to continue and expand in 2015 as Microsoft moves from a proprietary software company to an open cloud services company.  Microsoft has recognized that its future is hosting EVERYTHING, not just Microsoft designed and engineered products.

4. Price Wars

Microsoft and all the other major cloud providers are in a massive price war which is driving prices down.  If you had purchased basic cloud storage in 2012, you would have paid $0.14 per GB.  In 2013, that price was $0.07 per GB.  At the end of 2014, it’s as low as $0.03 per GB.   Similarly, an A3 VM would have cost $0.48 per hour in 2012 and is now running at $0.32 per hour.
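
The storage numbers above work out to a dramatic cost reduction for anyone holding a meaningful amount of data.  A quick worked example using those quoted per-GB rates:

```python
# The per-GB monthly storage prices quoted above, by year.
prices = {2012: 0.14, 2013: 0.07, 2014: 0.03}

def monthly_cost(gb, per_gb):
    """Monthly storage cost for `gb` gigabytes at `per_gb` dollars per GB."""
    return gb * per_gb

# Storing 1 TB (1024 GB) per month at each year's price:
for year, per_gb in sorted(prices.items()):
    print(year, round(monthly_cost(1024, per_gb), 2))
```

The same terabyte that cost roughly $143 per month in 2012 costs about $31 at the end of 2014 – nearly an 80% drop in two years.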

Expect the price drops to continue in 2015 as Microsoft competes with Amazon and others for market share and the economies of hardware, storage, etc. continue to improve over time. 

3. Hadoop and Big Data Go Mainstream

Big data has been a buzzword in the industry for the past several years, but in many cases it has been more hype than practice.  Hadoop became the key platform for big data in 2014, with Microsoft embracing it as its core platform.  Hadoop providers have received massive investments and their revenue is expected to grow by 60% year over year in 2015.

However, there have also been significant barriers to adoption, and CIOs have been slow to commit to big data platforms.  2014 was a year of many proofs of concept, investigations and hype demos, but also lots of concerns, trepidation and adoption challenges for big data technologies.

In the same way that the cloud saw massive growth in 2014 as it went from hype to mainstream, I see 2015 as a pivotal year in the big data story as CIOs start to move from the hype, research and proof of concept stage to mainstream use of these technologies.

In addition, as Microsoft’s new big data services come online – Batch, Storm, Stream Analytics, Machine Learning, etc. – they will start to reduce the complexity of engineering big data solutions and move the market forward to embrace the big data promise, with less need for data scientists and PhD computer scientists to figure it out. 

2. The Resurgence of PAAS

Microsoft originally championed the cloud as PAAS and quickly had to backtrack into the IAAS business as Amazon took the market.  However, running virtual machines in the cloud in the same way as on premise is not economical in the long run because you still end up owning the costs for maintenance, patching, upgrades, etc.  Microsoft has gone head to head with Amazon and other IAAS vendors and can now compete well in basic VM hosting – however, this is ultimately a race to the bottom as the prices continue to drop. 

The real differentiator for Microsoft in the long run is PAAS – it has the development tools, the APIs, the stacks and the developer community to make running your own virtual machines seem as antiquated as running them on premise.  Some key changes to Azure in 2014 will make PAAS an increasingly compelling option in 2015:

  • Azure Web Sites is a low cost, high scale option for running public facing web sites and a very attractive alternative to running your own servers.  Scalability of the Azure Web Sites offering is already really good.
  • Azure SQL continues to drop in price while increasing in features and performance.  The new version of Azure SQL (currently in preview) brings almost complete compatibility as well as improved performance to the existing service. 
  • Microsoft’s new big data offerings are all PAAS services – they give you control over how many VMs or instances are provided so you can scale as needed, while not requiring any management of the underlying infrastructure.  For example, the new Azure Batch provides access to pools of VMs on demand with no need to maintain them – the service manages them as a generic pool, including provisioning and deprovisioning.

For most customers, PAAS will provide a more economical, easier to maintain and more scalable service than building your own virtual infrastructure using IAAS.  As additional services arrive and finer grain control over PAAS services (dedicated performance units, scalable tiers, etc.) emerges, the case for IAAS based services is being undermined by easier to use PAAS services.

1. Performance Challenges and Opportunities

One of the key challenges and opportunities we have seen with Azure is performance and scalability.  For example, running SQL Server on IAAS has been a challenge because of poor I/O performance.  On the other hand, we have seen really good scalability from Azure Web Sites, especially with the Shared tier. 

The bottom line is that a cloud based VM isn’t the equivalent of a VM running on premise, for lots of different reasons.  The I/O performance tends to be poorer, the network latency is harder to control, and Microsoft sells performance on Azure SQL as a set of “Dedicated Performance Units” which don’t map well to traditional servers.  Before launching any cloud service, performance testing is a must to ensure that your particular scenario will scale as expected and perform economically.  In some cases, the cost of the scalability is dirt cheap (for example, scaling up an Azure Web Site or an Azure SQL database) while in other cases it can be quite costly (for example, scaling up VMs running SQL Server in IAAS). 


Microsoft has been introducing new service tiers that start to address some of these challenges.

Expect more of these types of improvements as we move into 2015 as customers start to leverage Azure for performance demanding workloads. 

Read More

If You Are Running WordPress on Azure, Turn on Photon!

Automattic, the company that maintains WordPress, also maintains a widely used plugin called Jetpack.  Within Jetpack, there is a feature called Photon that you can activate to automatically serve all of your images from WordPress.com’s content delivery network.

With that one click, all of your images are now served from a free CDN that will speed up performance worldwide and offload traffic from your Azure Web Site subscription.
For those of you running large scale or “premium” WordPress sites on Azure, this may be one way to avoid having to scale up your hosting plan in order to maintain performance around the world. 

Read More

Microsoft Azure CDN Rolled Back Due to Lack of Cache Control

Last night, I turned on Azure CDN for the first time.  It worked great from a performance perspective, but I ran into a significant limitation that meant I had to roll back and turn it off – lack of cache control.

When I launched the web site through Azure CDN, the home page was cached – permanently.  New blog posts weren’t showing up on the home page!

It turns out that WordPress, by default, does not include any HTTP Cache-Control headers for pages.   When launched behind the CDN, the home page was therefore cached without any expiry date or no-cache directive. 
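
The general lesson is worth encoding: before putting any site behind a CDN you cannot purge, check whether its responses carry an explicit, bounded caching policy.  A minimal sketch of that check (the logic is generic HTTP, not anything Azure-specific):

```python
def safe_behind_cdn(headers):
    """Return True if the response headers give a CDN an explicit,
    bounded caching policy, so a stale page can eventually expire
    or is never cached at all."""
    cache_control = headers.get("Cache-Control", "").lower()
    if "no-store" in cache_control or "no-cache" in cache_control:
        return True   # CDN is told not to serve a cached copy blindly
    if "max-age=" in cache_control or "s-maxage=" in cache_control:
        return True   # bounded lifetime; stale copies expire on their own
    # No directive at all: the CDN falls back to its own default TTL
    # (7 days for Azure CDN at the time) and you cannot purge it.
    return False

# A default WordPress page response carries no Cache-Control header:
assert safe_behind_cdn({}) is False
assert safe_behind_cdn({"Cache-Control": "public, max-age=300"}) is True
```

Had I run this check against my home page first, the missing header would have been obvious before the CDN cached the page permanently.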

There is no ability to invalidate, control or set expiry rules on cached objects within Azure CDN either through the Azure Portal or programmatically.  There also doesn’t seem to be a way to re-issue the caching directive because the page is now cached with the original content.  I tried recreating the CDN from scratch but that just recreates the end-point – the cache itself sticks around and there is no invalidation even when deleting and creating the CDN end-point.

The default time to live for Azure CDN is 7 days, so it appears I’m stuck – I have rolled back to the original Azure Web Site configuration, remapped my DNS entry back to the standard web site, and will wait until the cache expires!

In the meantime, there is something that we can all do – check out the Azure Feedback Forum and vote for a change to Azure CDN to provide better control over cache expiry and invalidation.  It is already the number one request on the feedback board.  The official status for this feature is “planned” but there is no release date.  


Read More

Speed Improvements from Migrating to Azure CDN

This web site runs on WordPress on Microsoft Azure Web Sites.  As traffic has been increasing, I have been looking for ways to improve performance.  One of the key challenges with a global web site such as this blog is serving requests from around the world. 

As an experiment, I have migrated the entire web site to Microsoft’s Content Delivery Network.  Microsoft CDN provides direct integration with Azure Web Sites and migrating to it was as simple as turning it on and then re-pointing the domain to the CDN URL instead of the Azure Web Site URL. 

To turn on CDN, you create a CDN service and point it at your existing Azure Web Site or Storage Account.  In this case, I pointed it at the existing web site.  This creates a globally cached network of servers that are much closer to your end users than your single data center. 

The next step is to re-point the DNS entry at your DNS provider to the CDN end point instead of your existing web site (it took about 90 minutes for the DNS change to propagate – until then I saw a 404 page).

Using a speed testing site called DotCom Monitor, I tested the CDN version of the site and the non-CDN version.  The results are below.

If you look at locations like Japan, India, the United States, France, Poland, etc., they are all now significantly faster.  Drilling down into the waterfall chart, loading the bare home page in India drops from 5.84 seconds to 1.22 seconds.  Similarly, from Amsterdam the time to load the bare home page shrinks from 1.15 seconds to 346 ms. 
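
Expressed as speedup factors, the before/after numbers above look like this:

```python
# Before/after page-load times reported above, in seconds.
timings = {
    "India": (5.84, 1.22),
    "Amsterdam": (1.15, 0.346),
}
for place, (before, after) in timings.items():
    print(f"{place}: {before / after:.1f}x faster behind the CDN")
```

Roughly a 4.8x improvement from India and 3.3x from Amsterdam – exactly the kind of gain you expect when requests stop crossing the globe to a single data center.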

Site without CDN


Site Running Through CDN


Read More

Azure Web Sites Performance Analysis – Hosting Plans Compared

With the many options for scaling up an Azure Web Site, I wanted to understand each option and the type of performance I could expect from it.  I created a simple performance test using the Bakery site, a basic ASP.NET web site template with a sample store: a catalogue of a few items from which you can place an order. 

The default out of the box implementation uses SQL Compact as its database, which runs the database through the file system.  With only a single user hitting the page, the average page load time is about 300-400 ms.  I also tested migrating the database to a proper SQL Azure database and the average load time dropped to 30-50 ms, an almost 10x improvement in performance. 

In order to test multiple users hitting the site, I created a performance test using the open source tool JMeter.  It was a basic test of how fast pages could be retrieved under load.  I tested different loads using 1, 2, 4, 8, 12, 20, 50, 100, 200 and 500 concurrent users.  Tests were run from within an Azure VM so the latency was very low.
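
The JMeter test plan itself isn’t included in this post, but the shape of such a test can be sketched in a few lines of Python.  This is a simplified stand-in, not the actual methodology used – the `fetch` callable is injectable, so in a real run it would wrap an HTTP request against the site under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def run_load_test(fetch, concurrent_users, requests_per_user):
    """Call `fetch` repeatedly from `concurrent_users` threads and
    return the average per-request latency in milliseconds."""
    def worker(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fetch()
            latencies.append((time.perf_counter() - start) * 1000)
        return latencies

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        batches = pool.map(worker, range(concurrent_users))
        all_latencies = [latency for batch in batches for latency in batch]
    return mean(all_latencies)

# Stand-in fetch that just sleeps 1 ms; a real test would do something like:
#   fetch = lambda: urllib.request.urlopen("https://<your-site>").read()
avg_ms = run_load_test(lambda: time.sleep(0.001),
                       concurrent_users=4, requests_per_user=5)
```

A real tool like JMeter adds ramp-up, think time, and error accounting on top of this basic loop, but the measurement idea is the same.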

Here are the results using a number of different concurrent threads to test the scalability of each option.  The summary of the results can be found here in this slideshare presentation.



Free worked quite well running up to 20 concurrent users.  However, at 50 threads, the response time doubled and within a couple of minutes the Data Out quota was reached and the entire site was disabled!

Free is really only useful for testing – with only 165 MB per day in the Data Out quota, you’re going to run out even with the most basic web site under any reasonable load.


Using the SQL Compact edition, Shared still scaled up really well.  Even at 50 concurrent users, the average response time was only 400 ms.  However, at 100 threads we started to see a slowdown, with response time increasing to an average of 1290 ms.  At 200 threads, the slowdown was even more pronounced at 1755 ms.

If you look at the Azure dashboard, you can see that as the tests ran there were 8,369 requests in one minute and the site barely broke a sweat!


Running using Azure SQL improved scalability even further.   It could easily handle 100 concurrent users without any noticeable change in performance. 

However, one of the key limitations for Shared is the quota limits that are imposed on your site.  While Shared can handle spikes in traffic quite nicely, you need to be careful about exceeding the quotas. 

Running a simple ASP.NET page using the SQL Compact database, I was able to exceed my CPU quota in less than 5 minutes running at 50 concurrent users and then my site was disabled.  Running the more optimized SQL Azure database, I could do the same at 200 concurrent users.

The key quota limit is CPU time – it resets every 5 minutes and limits your site to “2.5 CPU minutes”.  Essentially, if you have too much traffic in a 5 minute period your site is disabled. 
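
To see how tight that budget is, treat the quota as a fixed pool of CPU seconds per window.  This sketch uses the measured page time as a rough proxy for CPU time per request, which overstates the real CPU cost, so treat the numbers as a lower bound on capacity:

```python
# The Shared tier quota described above: 2.5 CPU-minutes per 5-minute window.
quota_cpu_seconds = 2.5 * 60   # 150 CPU-seconds of budget per window

def max_requests_per_window(cpu_ms_per_request):
    """How many requests fit in one 5-minute window before the quota
    is exhausted and the site is disabled."""
    return int(quota_cpu_seconds * 1000 // cpu_ms_per_request)

# At ~30 ms of CPU per request (the optimized SQL Azure page):
print(max_requests_per_window(30))   # 5000 requests per 5 minutes
# At ~300 ms per request (a heavier page):
print(max_requests_per_window(300))  # 500 requests per 5 minutes
```

That is why optimizing page time matters so much on Shared: a 10x faster page buys you 10x the traffic before the quota shuts you down.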


Shared is still the best option of all the tested plans for handling short spikes in traffic, as long as your site doesn’t exceed its usage quota.

Basic Small

Basic has 3 configurations – Small, Medium and Large.  We tested each of these configurations under load.

Basic Small delivered significantly slower performance than Shared in all scenarios.  If you look at the graph from the Azure monitor, you can see the difference in requests being handled between the time the site was in Shared and the time it was in Basic Small.


In Shared, we peaked at 8,300+ requests per minute even running under SQL Compact, while in Basic Small the site was barely managing 250 requests per minute.  By the time we loaded up to 50 users, the Basic Small site started generating errors from the SQL Compact database because it couldn’t handle the requests fast enough.

Running using a SQL Azure database, Basic Small works better but still suffers from scalability issues.  At 20 concurrent users, the average load time was 93 ms compared to an unloaded 30 ms.  At 50, 100 and 200 concurrent users, the performance got progressively worse.

Basic Medium

Basic Medium fares much better than Basic Small – it consistently delivers reasonably good results even at higher loads.  As you can see from the following graph, Basic Medium peaks at significantly higher requests per minute than Basic Small under all test scenarios.


As a result, Basic Medium performance is stable at around 400-450 ms on average up to 20 concurrent users when running the SQL Compact database, and then starts to increase slowly from there.  When running SQL Azure, we could scale to 50 concurrent users without a significant decrease in performance. 

Basic Large

Basic Large is the first instance we tested with faster average load times than Shared and Free.  It also scales much better than Basic Small or Medium, staying at an average of 300 ms even with 20 concurrent users running under our SQL Compact database.  Running under SQL Azure, Basic Large performed well up to 20 concurrent users, and then performance slowly degraded as we increased to 50, 100, 200 and 500 concurrent users.

Standard Small

Switching to Standard provides the same VM sizes as Basic but adds the ability to scale out the number of instances from 3 (Basic’s maximum) up to 10, plus the ability to auto scale when your instances become bogged down. 

The performance of Standard Small is a little better than Basic Small.  In my test with Basic Small, we started seeing errors at 50 concurrent users.  With Standard Small, performance is very slow but the site manages to get through the test.  However, at 100 concurrent users, Standard Small also fails and starts generating errors.

Standard Medium and Standard Large

Standard Medium delivered similar performance to Basic Medium.  Standard Large is rock solid with the best scalability of all the options.  At 200 concurrent users, the instance is still delivering consistently.  Performance is about the same as Basic Large.

Shared X 3

Using Shared, you can increase the number of instances up to 10.  What happens if you scale out to 3 instances – do you get 3x the performance?

Running multiple instances meant that only SQL Azure was supported – SQL Compact won’t work in this scenario at all because its data lives on the local file system.  Running Shared on 3 instances, the page load time was stable at 30 ms, even with 200 concurrent users!

At 500 concurrent users, I was able to again exceed my quota after a couple of minutes at that load.  However, in that time I was able to generate almost 50,000 page views running on three Shared instances.

Basic Medium X 3 and Standard Medium x 3

Basic Medium or Standard Medium running on 3 instances can handle a LOT of traffic.  Both were rock solid running my test up to 50 concurrent users with an average load time of 33 ms.  With 100 and 200 concurrent users, performance degraded but remained reasonable – about 50-60 ms.  Even with 500 concurrent users, performance was still a respectable 124 ms. 

Standard Medium running on 3 instances performed at about the same rate as Basic Medium. 

Running 3 Basic Mediums is only slightly more expensive than a single Standard Large, but the performance is significantly better.  At 500 concurrent users, the cluster of 3 Basic Mediums was serving pages at 124 ms, while at the same load a Standard Large was taking 620 ms to serve a page.
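
Using the latency figures above, the scale-out advantage is easy to quantify:

```python
# Average page-serve times at 500 concurrent users, from the tests above.
basic_medium_x3 = 124  # ms, 3 x Basic Medium instances
standard_large = 620   # ms, 1 x Standard Large instance

speedup = standard_large / basic_medium_x3
print(f"3 x Basic Medium is {speedup:.1f}x faster at 500 users")  # 5.0x
```

A 5x latency advantage for comparable spend is the core argument for scaling out rather than up.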

Standard Large X 3

Standard Large running on 3 instances is massive.  It ran easily with 500 concurrent users with barely any decrease in performance! 


As you can see by the graph, we peaked at almost 40K page views per minute! 

Key Conclusions

In analyzing the performance of all the various hosting plans, here are my key conclusions:

  • Underlying baseline performance makes a big difference in scalability.  Optimizing your page rendering time can allow you to reduce hosting costs by allowing you to run under smaller or fewer instances.
  • The best performing and most economical hosting plan is Shared.  However, with imposed quotas, you have to be careful not to exceed your limits or your site is disabled.
  • Azure SQL runs very fast and scales well.  Even with 500 concurrent users, Azure SQL was never a noticeable bottleneck.
  • Scaling out shared instances also scales out quota limits – 3 Shared instances @ $30 / month might be a better bet than 1 Basic Small at $60 / month.
  • Basic / Standard Small are poor scalability choices – the potential cost savings compared with Medium is eroded quickly by degradation in performance under load.
  • Scaling out (e.g. adding multiple instances) is generally more reliable, higher performing and cheaper than scaling up.  For example, running 3 Basic Mediums provided superior performance to 1 Standard Large and cost is comparable. 
  • Autoscale is only available with Standard and only works horizontally – e.g. you cannot automatically scale up from a Small to a Medium to a Large.

Azure Web Sites as a platform is an incredible option, especially for high volume web sites.  It scales well, and the various hosting plans mean you can pay as little as $10 / month for your web site.  As you need more capacity, you can change the configuration at any time and pay only for the capacity you are using.   

Read More

Top 10 Predictions for Office 365 / SharePoint in 2015

In reviewing the major changes, rumors and announcements from Microsoft in 2014, here are my top 10 predictions for Office 365 as we move into 2015.

10. New Version of SharePoint On Premise


SharePoint on premise hasn’t changed much since its release in 2013, while Office 365 has incrementally evolved as part of Microsoft’s “cloud first” strategy.  It is expected that the next version of SharePoint will be announced at the Ignite conference coming up in May 2015.

Given the number of features in Office 365 that are now cloud only, it’s not clear what will still make it in the next version of SharePoint on premise. 

9. Office 365 Takes Over from SharePoint as the “Portal”

As the various services have evolved within Office 365, SharePoint as the core “portal” experience has been pushed back in favor of the Office 365 experience.   In 2014, we saw the introduction of branding for Office 365 and the App Launcher as examples of Microsoft’s go-forward strategy to integrate the various portal layers that have evolved in silos into one integrated Office 365 experience.


8. SharePoint becomes a Back-End Document Repository For Custom Applications

With the introduction of the Office 365 APIs, the new Delve APIs and the evolution of the CSOM based SharePoint APIs, Microsoft is trying to position Office 365 and its various repositories (documents, email, conversations, etc.) to be the backend for custom web and mobile experiences. 

With these new APIs, we no longer need to be in the business of building web parts, writing server side code, and understanding the nuances of SharePoint development – we can simply treat SharePoint, Exchange, Lync, Yammer, etc. as repositories to code against for uploading content, searching for news, and so on.  This means we can build applications using our own tools, our own deployment vehicle and our own user experience. 

7. Better Integration of Azure and Office 365

Azure and Office 365 have evolved in a somewhat isolated fashion, with Azure focused on general cloud hosting and Office 365 focused on collaboration.  However, there is a lot of potential in integrating the two services, and doing so is a huge competitive differentiator for Microsoft.

Imagine being able to use Azure ML against Office 365 repositories, imagine being able to automate SharePoint tasks using Azure Batch, and imagine being able to communicate updates from your Hadoop data processing through Office 365 portals.  There are lots of possibilities and while these are all possible using a bunch of custom coding, expect these types of scenarios to get increasingly easier as Microsoft evolves both services.

6. Increasing Isolation of Branding

Microsoft has already recommended against custom branding of SharePoint in Office 365.  Expect this to continue as Microsoft takes further control of the out of the box Office 365 user experience. 


5. New Fit for Purpose Apps instead of a Single Portal

Microsoft released the new Office 365 Video Portal, a portal specifically designed for videos.  In the past, this would have been a web part that fit within the core SharePoint portal user experience, but that model clearly limits Microsoft’s ability to compete with other fit for purpose experiences.  

Expect more of these types of fit for purpose “apps” to take over more and more of the core portal experience provided currently by SharePoint.

4. Microsoft Plays Well with Others

Do you remember the days when the only way to access SharePoint was on a Windows Phone?  In 2014, Microsoft dramatically pivoted and started releasing first class applications on iOS and Android.  Expect this to continue in 2015, as Microsoft has clearly moved to a more open model and recognized that it can no longer own the entire platform and must share with other players.

Excel on iPad


In a similar way, Microsoft has also been integrating on the backend with Salesforce and Dropbox.  Expect more integration with other SAAS providers as Microsoft attempts to be the core integrator of all these disparate services into a unified portal hub.

3. Improvements to Cloud File Storage

One of the key services for Microsoft is OneDrive for Business.  However, compared to other services such as Dropbox, it is immature and doesn’t provide the same performance. 

What started as a service with a 25 GB limit for OneDrive for Business is now 1 TB for every user – expect this allocation of storage to continue to increase to unlimited in 2015 (it’s already unlimited for personal users).  Similarly, the 2 GB maximum file size was recently increased to 10 GB for OneDrive for Business.  One of the other major barriers is the 20,000 limit on files that can be synchronized in OneDrive for Business.

As the demand for larger files and the number of files increases, one of the key challenges with OneDrive for Business is performance – it’s too slow to synchronize and not reliable enough to be considered a rock solid replacement for your “H Drive”.  Microsoft has recognized this and has announced changes to the service to improve performance – expect this to evolve in 2015 as customers demand ever increasing file sizes.

2. A Replacement for InfoPath

Microsoft promised a replacement for InfoPath when it officially retired it as a platform for creating electronic forms.  A replacement strategy was noticeably absent at last year’s SharePoint conference, so expect further announcements in 2015 around electronic forms and lightweight form development within the Office 365 platform.

1. Retirement of SharePoint as a Public Facing Web Site Content Management Platform


SharePoint 2013 can act as a decent (although expensive) public facing web site platform.  However, it’s clear that SharePoint cannot keep up with the rapid evolution of public facing web frameworks, mobile development frameworks and JavaScript libraries.  The web itself has changed significantly with the move to a more app centric model and many different channels for pushing web content beyond a one size fits all web site.  In addition, Microsoft has been promoting Azure as a public facing web site hosting platform, and is promoting platforms such as WordPress as first class web content management solutions because they are much cheaper and more agile to host than SharePoint.

Just recently, Office 365 dropped the free public facing web site feature from its lineup.  While you can continue to build web sites on the SharePoint 2013 on premise platform, it’s a very expensive platform compared to many other options on the market.  With the evolution of the Office 365 APIs, Microsoft is clearly pushing SharePoint in the direction of a backend content repository while allowing you to build your user experiences on easier and cheaper platforms such as WordPress or other .NET based WCM platforms.

Read More

Microsoft Officially Drops Public Websites from Office 365



As posted previously, there were several rumors floating around the Internet that Microsoft would drop the public facing web site feature from Office 365.

Microsoft has just announced officially that this feature will be dropped from Office 365 starting in January 2015.   

As part of the evolution of the Office 365 service, we periodically evaluate the capabilities of the service to make sure that we’re delivering the utmost value to customers. Today, we’re making a difficult decision to discontinue the SharePoint Online Public Website feature. This lets us then focus on future investments while broadening our partnership with industry leaders.

For customers currently not using this feature, it will no longer be available as of January, 2015.

If you currently use this feature, you will have two years to keep your web site running before it is shut off.  If you want to continue running your public web site on the SharePoint platform, Microsoft recommends turning to “third party solutions for public website functionality.”  It’s not clear exactly what this means, but additional details will be provided in January 2015.  One can assume this is a reference to offerings such as FPWeb, which provides hosting of standard on premise versions of SharePoint 2013, allowing organizations complete control over their SharePoint environment.


Major Changes Coming to Power BI

Power BI is Microsoft’s cloud based business intelligence platform.  What started out as an add-on to Excel and SharePoint has morphed into its own standalone service. 

Major changes to the platform have been announced this week that have significant impact on how customers will purchase, interact and deploy this platform. 

Power BI vs. Power Pivot vs. Power View – Still a Confusing Story

One of the most popular blog posts on this web site is one entitled PowerView vs. PowerPivot vs. Power BI and it describes the key differences between the three products and how they relate to each other.

The new Power BI Preview doesn't make the explanation any easier – in some ways it's now more confusing than ever.  It's not clear whether Power Query, Power View and Power Pivot are sticking around, being retired or being absorbed into the new service.  Nor is it clear whether the current Excel designers (e.g. the Power Query add-in and Power View add-in) will be absorbed into the new Power BI designer or continue to be maintained as first class products within Excel.

New Visualizations Available only in Power BI Preview

In addition to being HTML 5 based, the new Power BI preview comes with a number of new visualizations to add to your dashboards including:

  • Combo Charts
  • Filled Maps
  • Gauges
  • Tree Maps
  • Funnel Charts

Power BI Is No Longer Tied to Excel and Office 365

One of the key drivers for moving users to Office 365 was that Power BI as a feature set was only available within Office 365.  Similarly, you needed Excel 2013 as a key authoring tool to create Power View reports and deploy them to SharePoint online.

While Office 365 and Excel will still work well with the Power BI platform, they are no longer required.  Power BI is now independent of both, with its own portal and its own designer for building dashboards.  The designer is HTML 5 based, which means you can design reports in a browser instead of requiring Excel as a desktop client.

Power BI Supports but No Longer Depends on Your SQL Server

Power BI is essentially now a pure NoSQL based service.  If you look at the APIs, the data sets you can send into Power BI are JSON based and completely abstracted from traditional SQL databases. 

While Power BI can pull data from SQL Server, it also supports a variety of other platforms, including non-Microsoft SaaS services.  Expect the list of supported SaaS services to expand dramatically as the preview becomes the full production version.
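The preview's REST API illustrates this JSON-first model.  As a rough sketch (the datasets endpoint is the one documented for the preview; the "Sales" table, its columns and the token handling are made up here for illustration), a push dataset is just a JSON schema definition:

```python
import json

# Sketch of a dataset definition in the JSON shape the Power BI preview
# REST API accepts (POST https://api.powerbi.com/v1.0/myorg/datasets).
# The "Sales" table and its columns are hypothetical examples.
dataset = {
    "name": "SalesDemo",
    "tables": [{
        "name": "Sales",
        "columns": [
            {"name": "Region", "dataType": "String"},
            {"name": "Amount", "dataType": "Double"},
        ],
    }],
}

payload = json.dumps(dataset)
print(payload)

# Sending it would be a plain HTTPS POST with an OAuth bearer token, e.g.:
#   requests.post("https://api.powerbi.com/v1.0/myorg/datasets",
#                 data=payload,
#                 headers={"Authorization": "Bearer <token>",
#                          "Content-Type": "application/json"})
```

Note that nothing in the payload references a SQL schema – rows are pushed later as plain JSON arrays, which is what makes the service feel abstracted from traditional databases.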


Expect Deeper Connections to Azure Data Services

Microsoft has been busy announcing a number of new Azure data services including Stream Analytics, Data Factory and updates to HD Insight.  Expect to see connections between all these services with Power BI being the visualization layer as data is processed through these new services.

Power BI Will Connect to Your On-Premises Data Sources

Power BI has had a method for connecting to your on-premises data sources for a while now.  The new preview includes a similar connector for providing access to your on-premises databases, although only Analysis Services is currently supported.

Once you have installed the Connector client locally, it broadcasts data from your internal environment up to your Power BI cloud environment.

Power BI for iPad Is Coming Soon

Microsoft will be releasing a new iPad app for Power BI that will allow you to render visualizations on your mobile device.  It's still not clear whether an Android version is coming as well, or whether you'll be able to design reports on your tablet or just consume them. 


Pricing Hasn’t Been Updated

It’s not quite clear how you pay for the new Power BI service.  In the current model, Power BI is an add-on service to your Office 365 plan or you can purchase it as a standalone subscription.  

However, now that the service isn't directly tied to SharePoint or Excel, this could change considerably.  For example, Office Online started out in a similar way, with a direct connection to Office 365.  However, Office Online is now effectively a freemium service that you can use with your Hotmail account. 

Performance is a Key Outstanding Question

One of the key challenges with the current Power BI and Power View platform is performance, in particular with large datasets.   Power Pivot is a great solution for doing ad hoc analysis with a couple of million records, but if you try to load it up with tens of millions of records in Excel, you'll find it slows down considerably.   This is because the Power Pivot model is cached on the client and dependent on your local machine's capabilities.  In addition, because it is a client cache model, it needs to refresh the data from the original data source, and this can take quite a while depending on where the data lives and how much needs to be refreshed.

It’s not clear whether the new Power BI uses a similar model or whether its connections to underlying data sources are live.  For example, if I connect Power BI to my data, does it create a local cache of the data or is it running queries against my data source directly?  Similarly, if I connect to my data warehouse on premise, is Power BI going to cache the entire query in the cloud or will it hit my database each time it needs to run an MDX query?


SharePoint Metadata vs. Folders – Best Practices

We just published to SlideShare a new presentation on best practices for using metadata instead of folders in SharePoint 2013.  You can find the presentation here.


In general, we still see lots of clients using folders instead of site columns in SharePoint because they are easier and more ad hoc.   We also find that many of our clients don't have a fundamental understanding of metadata and taxonomy concepts before setting up document libraries, and therefore default back to a folder structure because it is familiar.

There are lots of good reasons to use SharePoint taxonomy features such as site columns, content types and the term store, and this presentation outlines some of those advantages.
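To make the contrast concrete, here is a small illustrative sketch (the site URL, list title and column names are all hypothetical) of how SharePoint 2013's REST API lets you retrieve documents by metadata with one flat query, whereas a folder structure forces you to know and traverse the physical path:

```python
# Hypothetical site, library and site columns, for illustration only.
site = "https://contoso.sharepoint.com/sites/finance"
list_title = "Invoices"

# Folder approach: the caller must know the exact physical path,
# and reorganizing folders breaks every such URL.
folder_url = (
    f"{site}/_api/web/GetFolderByServerRelativeUrl"
    "('/sites/finance/Invoices/2014/Q4')/Files"
)

# Metadata approach: one OData query against site columns,
# regardless of where the files physically live.
metadata_url = (
    f"{site}/_api/web/lists/getbytitle('{list_title}')/items"
    "?$filter=Department eq 'Finance' and FiscalQuarter eq 'Q4'"
    "&$select=Title,Department,FiscalQuarter"
)

print(folder_url)
print(metadata_url)
```

The metadata query also composes naturally with views, search and managed navigation, which is where the term store and content types start to pay off.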
