Microsoft’s New GigJam App Takes on Composite Applications

Microsoft announced a new application called Project GigJam, designed to provide a unified, portal-style presentation layer through a native composite app that runs on your laptop, tablet or phone.

The basic metaphor is that a user designs a “gig” (named for the gig economy) by adding a set of cards.  These cards are presentation views of underlying services, repositories or applications.  The user can link data together across multiple applications – for example, a customer record stored in both CRM and SAP.  Once the view is created, the user can share it with colleagues, annotate it, and so on.

The user can also hide items that they do not want to share simply by crossing them out:

[Image: crossing out items to redact them before involving others]

Co-authoring, co-reviewing and real-time collaboration are also built in, as is Cortana voice integration.

Read More

Building Your Own Micro-Service APIs to Wrap Office 365 or SharePoint 2013 for JavaScript Web Developers

Office 365 and SharePoint provide a variety of APIs that can be accessed and called from JavaScript.  Imagine the following scenario:

image

Microsoft provides a collection of APIs that you can use to fetch data from SharePoint.  These APIs work quite well but are not exactly intuitive to the non-SharePoint developer (e.g. a web developer who lives in JavaScript/AngularJS/Bootstrap/HTML5 all day). 

Imagine that we have a mobile friendly, responsive web application and we want to display the latest news from SharePoint.  Using JavaScript/AngularJS/HTML5 we can build a nice responsive page layout that has a panel that might look something like this:

image3

If I’m writing an AngularJS controller, what I really want is a JSON object that represents the content to be rendered.  The SharePoint search APIs can provide this content through either REST or JavaScript endpoints.  However, there are some SharePoint specific “idiosyncrasies” (kind word for it!) that take time to learn before you can master these APIs.  These include:

  • Understanding all the built-in search fields, content types, etc. and their non-intuitive identifiers (for example, did you know that “News” has a content type ID of “ǂǂ416e6e6f756e63656d656e74”?)
  • With the search APIs, you have to explicitly ask for fields to be returned, such as the body text of the news item.
  • A search can return multiple result sets – for example, SharePoint will provide one result set of found pages and another of relevant results.
  • Searching, filtering, and sorting each have their own APIs that need to be learned in order to retrieve the results required.

This is just scratching the surface – you could spend months figuring out how the SharePoint search APIs work, and that’s just for fetching content.  Adding content has its own learning curve as well.
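To make this concrete, here is a rough sketch of assembling a search request URL by hand.  The /_api/search/query endpoint path is real, but the query text and managed property names below are illustrative assumptions, not a tested SharePoint contract:

```typescript
// Sketch: building a SharePoint search REST query URL by hand.
// The endpoint path is real; the query text and property names
// are illustrative assumptions.
function buildNewsSearchUrl(siteUrl: string): string {
  const queryText = "ContentType:Announcement";
  const selectProperties = ["Title", "Path", "PublishingPageContent"];
  return (
    siteUrl +
    "/_api/search/query" +
    "?querytext='" + encodeURIComponent(queryText) + "'" +
    "&selectproperties='" + selectProperties.join(",") + "'" +
    "&rowlimit=10"
  );
}
```

Even this simple case forces the caller to know magic property names and the quirks of the query syntax – exactly the knowledge we would rather keep away from the front end developer.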

If you jam all this logic into your JavaScript, you’re putting a lot of complexity into your web developer’s hands that could be insulated away. 

A Proposed Approach: Use a Custom Web API to build a Micro-Service

What if instead of using JavaScript to interact with the API, we built a custom web API to act as an abstracted micro-service?  In this scenario, what our poor web developer wants is something as simple as this:

var responsePromise = $http.get(vm.NewsLatestURL, config);
responsePromise.success(function (dataFromServer, status, headers, config) {
    vm.newsItems = dataFromServer;
});
responsePromise.error(function (data, status, headers, config) {
    alert("Loading NewsLatest failed!");
});

I ask for a JSON object, and I get one returned.  Simple.  Once I had this micro-service written, I could hook up the JavaScript in about 30 seconds.  More importantly, I didn’t need to know anything about SharePoint at all.
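For illustration, the JSON contract might look something like this – the field names here are hypothetical, chosen by whoever designs the micro-service, not dictated by SharePoint:

```typescript
// Hypothetical shape of a pre-processed news item returned by the
// micro-service; the field names are illustrative assumptions.
interface NewsItem {
  title: string;
  url: string;
  summary: string;
  published: string; // ISO 8601 date string
}

// A sample payload as the AngularJS controller would receive it.
const sample: NewsItem[] = [
  {
    title: "Quarterly results posted",
    url: "/news/quarterly-results",
    summary: "Q3 numbers are now available.",
    published: "2015-07-01T09:00:00Z",
  },
];
```

Nothing in this shape betrays that SharePoint is behind it, which is the whole point.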

How do we build such an API?  We can use ASP.NET Web API as our framework.  In this scenario, I built a custom SharePointController that calls a business component that does all the idiosyncratic SharePoint API work and returns a very simple pre-processed JSON object.

image2

The code for the ASP.NET Web API controller is very simple because all the processing logic is in a separate service class.

public class SharePointController : ApiController
{
    [Route("NewsLatest")]
    public IEnumerable<SharePointNewsModel> GetLatestNews()
    {
        // URL, UserName and Password would typically come from configuration
        SharePointNewsService service = new SharePointNewsService(URL, UserName, Password);
        return service.LatestNews();
    }
}

Everything is mediated through a custom developed API – this makes it ridiculously easy for web developers to focus on HTML, JavaScript and CSS instead of needing to trace through the SharePoint APIs.

Additional Advantages to this Approach

There are some additional advantages to this approach vs. going straight at the APIs:

  • Microsoft has a tendency to change their APIs, add new ones or move them around.  Using a custom API that you control allows you to insulate and centralize these dependencies.
  • Using a custom API opens up the possibility of remapping security identities or using a service account type approach instead of always assuming the user is a SharePoint user that has logged into Office 365.
  • If you start with SharePoint on-premises and then move to Office 365, this approach will insulate your code from the migration.
  • You can add all sorts of useful and interesting business logic before sending the results back to the page.  For example, you could filter the results, limit the results, reformat the HTML, add additional lookup data, etc. – all of which sits within your centralized API.
  • If you decide to dump SharePoint and replace it with something else, you’ve isolated it from your JavaScript.
  • You could introduce performance enhancing caching approaches, such as storing results in memory or pre-rendering, to make fetching the data much faster.  Even logging into Office 365, for example, takes a couple of seconds, so having some method to cache content is going to be important for a high volume or high performance web application.

These are in addition to making your web developers’ lives simpler and allowing them to treat SharePoint as a content repository instead of having to understand the guts of the entire platform.
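As a sketch of the caching idea from the last bullet, here is a minimal time-based (TTL) cache wrapper – the TTL value and function names are assumptions for illustration, not part of any real API:

```typescript
// Minimal TTL cache: wraps an async fetch function so that repeated
// calls within `ttlMs` reuse the previous result instead of hitting
// SharePoint / Office 365 again.
function withTtlCache<T>(
  fetcher: () => Promise<T>,
  ttlMs: number
): () => Promise<T> {
  let cached: { value: T; expires: number } | null = null;
  return async () => {
    const now = Date.now();
    if (cached && now < cached.expires) {
      return cached.value; // served from memory, no round trip
    }
    const value = await fetcher();
    cached = { value, expires: now + ttlMs };
    return value;
  };
}
```

Because all calls funnel through the micro-service, a wrapper like this can be added in one place without touching any page code.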

Read More

Application Insight Monitoring Now Supports Java based Web Apps

Microsoft has just released an SDK for integrating the Application Insights service into Java web applications.

The Application Insights Service is a full monitoring solution for your web applications built into Azure Web Sites.  It is one of the compelling reasons to use Azure Web Sites over a traditional IaaS based solution.

With the Java SDK you can now instrument your Java web application code and send telemetry data to Application Insights.  The telemetry you can send includes server based method calls, client interactions with your web site, and web tests.  You can also send your web logs to Application Insights for reporting, slicing and dicing.

To add the SDK to your project, you can use Eclipse and the latest version of the Azure Toolkit for Eclipse and you can right click on your project and configure Application Insights from your Java project.   Check out the getting started documentation here.

[Screenshots: Application health, Usage Analytics, Metrics Explorer]

Read More

Angular 2 Being Developed On TypeScript

TypeScript is an open source language that sits on top of JavaScript, enabling developers to write better JavaScript by supporting class based object oriented programming and static typing.  It is developed by Microsoft but is open source and available on GitHub.  Programs written in TypeScript compile to plain JavaScript.

TypeScript is available as an extension for Visual Studio or as a Node.js package.
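As a small, illustrative example of those two features – a class with statically typed members – consider:

```typescript
// Class based OOP with static typing: the compiler rejects, say,
// new Circle("two") before the code ever runs.
class Circle {
  constructor(private radius: number) {}

  area(): number {
    return Math.PI * this.radius * this.radius;
  }
}

const c = new Circle(2);
```

The compiled output is plain JavaScript with no runtime dependency on TypeScript.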

Today, it was announced that Angular 2 will be built on top of TypeScript.  Microsoft has been working with Google to merge their efforts.

Google had been developing an independent scripting language called AtScript on top of TypeScript.  AtScript will now be abandoned and its features merged into the core TypeScript language as the two companies work on Angular 2 together!

Read More

How to use Microsoft Project Efficiently for Building Plans

As consultants, we have to develop estimates and the number one tool we use is Microsoft Project.  Over the years, I have developed a set of best practices for using Microsoft Project that will make building estimates in particular more efficient. 

De-clutter Your Workspace

When Project 2013 creates a new project, the default view includes a timeline and a Gantt chart.  For estimation, neither is particularly helpful and both clutter the screen, so remove them.

In addition, I would recommend removing the Indicators and Task Mode columns.

image

If you’re working with budgets, I would add in the Cost column for sure so you can see your costs changing as you plan.  I would also add in the Work column so you can see the hours.

Here is the de-cluttered version of the default project view.

image

Much simpler, easier to see the tasks and much faster for data entry purposes.

Split Your Window

Now that you have removed the timeline and Gantt chart, you can split your window to have the Task Form on the bottom pane and your task list on the top.  To split your window, just drag up the window from the scroll bar on the right.  If you right click on the Task Form you can change the view – I always use the Resources and Predecessors view.

image

This view makes allocations of resources and predecessors much easier than typing them directly into the cell.  You can adjust each allocation and instead of typing predecessors as numbers you can pick them from a drop down list. 

Configure Your Project from the Start

There are a number of settings that govern your project plan as a whole.  While you can change these at any time, if you set these up from the start you will find Microsoft Project is a lot faster and easier to use. 

Project Start Date

By default, Project sets the project start date to today’s date.  It seems obvious, but setting the project start date to an appropriate date will ensure that your project plan flows appropriately.

  image

Working Time

Setting the Working Time is important for two key reasons – it allows you to add in vacation days, holidays, etc. and it sets how long the day is for your plan.  By default, a day in Project is 8 hours.  Many organizations work on a 7.5 or 7 hour day instead – I recommend changing this before you start plotting out tasks.

image

Switch to Auto Scheduling

By default, Project sets all new tasks to be manually scheduled.  However, in most cases you want tasks to be automatically scheduled so that Project manages your schedule for you instead of you plotting out dates manually.   For estimates, don’t use manually scheduled tasks if you can avoid it – always use automatically scheduled tasks.

You can change the settings for newly created tasks by going to File –> Options –> Schedule and setting the New Tasks Created to “Auto Scheduled”.

image

You can also switch to auto scheduling by clicking on the New Tasks: Manually Scheduled button in the bottom left hand corner.

image

Change this to auto schedule.

Pick a Task Type and Stick With It

Microsoft Project has three types of tasks that you can use.

  • Fixed Units: units are fixed.  If you change the work value, the duration is recalculated but units stay the same.
  • Fixed Duration: duration is fixed.  If you change the work, the units are recalculated.  If you change the duration, the work is recalculated.
  • Fixed Work: work is fixed.  If you change the work, the duration is recalculated.  If you change the units, duration is recalculated. 

By default, any new tasks are created as “Fixed Units”, which in my experience is the least intuitive of the three options. 

You can change this by going to File –> Options –> Schedule.

image

My recommendation is actually to use “Fixed Duration” and to check the box “New Tasks are Effort Driven”.  Here is why I personally find it easier.

When I plan out a project, I don’t think to myself, “I need 83 hours of work and I have 1.5 developers to do the work.”  This is what the “Fixed Work” approach assumes – it assumes you know the raw number of effort hours to plot out.  Instead, when I estimate a project, I think to myself, “I need 2 developers for 2 weeks” or “I’m going to allocate a project manager for 30% of his/her time for the duration of the project”.  Fixed Duration works MUCH better for these types of scenarios because it is inherently schedule driven rather than work effort driven.  Work is calculated as an output – e.g. if you allocate 2 people x 2 weeks x 50% with 8 hour days, the work is automatically calculated to be 80 hours.
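The arithmetic Project performs for an effort-driven, fixed-duration task boils down to a single multiplication, sketched here (the 8 hour day is just the default discussed earlier):

```typescript
// Work as an output of the schedule: units × duration × hours/day.
// e.g. 2 people at 50% for 10 working days of 8 hours = 80 hours.
function calculateWork(
  people: number,
  allocation: number, // fraction of each person's day, e.g. 0.5
  workingDays: number,
  hoursPerDay: number
): number {
  return people * allocation * workingDays * hoursPerDay;
}
```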

If you’re the type of person who thinks in effort hours, then use Fixed Work as your task type. 

The worst practice in my experience is a project plan that uses multiple task types – this can cause errors in your plan and makes it hard to follow.  Set the default task type and stick to it consistently.

Highlight Your Critical Path

One of the key challenges with large and complex project plans is tracing through the critical path and ensuring that all your dependencies are included so that the path flows from the beginning to the very end of your project plan.

One easy usability enhancement you can do is to highlight critical path tasks.  You can change the text style of the critical path tasks – for example, you could change them to red and bold.  To do this, you click on Format –> Text Styles and then select Critical Tasks from the Item to Change dropdown.  You can then change the color, font, size, etc. for those critical tasks.

image

You can also highlight critical tasks in the Gantt chart as well by clicking on Format and then clicking the Critical Tasks checkbox.

image

Keep Your Dependencies Simple and Modular

A project plan is a set of tasks organized as blocks.  For example, the out of the box Software Development plan in Project is a series of modular blocks of tasks.

image

In general, I always start with the summary tasks first and create dependencies between them instead of creating relationships between individual tasks.  By using this approach, you can abstract what is inside each module and change it around without breaking the dependency.

This helps with moving around sprints, re-organizing complex plans, etc.  If I can keep the dependencies limited to the modules and not individual tasks, I can re-organize them as a set of blocks instead of having to re-organize each task and check every dependency.

Allocating Overhead

Every project has specific tasks (building something, testing something, etc.) and then an amount of work to manage the project.  “Management” in this case could include project management, architecture, operational support, etc. as pools of hours allocated to the project and not to a specific task.

In theory, you could add these overhead roles to every single task as a resource allocation, but this becomes impractical quickly.  Instead, what we do is create a specific task for overhead roles such as Project Management, Technical Architecture, etc. – any role that isn’t tied to a specific task.

This is where the Fixed Duration task helps a lot – in most cases, it’s conceptually easier to think of allocating a project manager or architect across the duration of a project or phase at a % allocation.  For example, a large project might require a full time architect, or even multiple architects, for the duration of the project to supervise the development team.  When you estimate this, do you think “I need 542 hours of architecture time”, or is it more practical to estimate with a model like “I need 50% of an architect for 6 weeks”?  I find the latter much easier to model.  We can create a task that encompasses the duration of the project or phase and then just allocate the resources appropriately.

Adding Contingency

Adding contingency is unfortunately not easy in Project and there are multiple types of contingency – hours, time, resources, cost, etc.  There are a few ways we add contingency to our projects:

  • You can use a custom formula to add a % to your cost.  For example, you could create a Cost with Contingency column and make it equal to Cost * 1.10.  This works well for cost calculations but doesn’t impact hours or time.
  • For schedule contingency, you can create tasks with no resources and just have them as artificial delays in the project plan – these don’t increase your cost but allow for schedule slippage.
  • Another schedule contingency approach I have seen used is to use a 7 hour calendar when you have an 8 hour day.  This effectively provides an additional hour per day in schedule slippage.  However, again this will increase time while not increasing cost or hours.
  • To add contingency hours that affect the schedule and cost accordingly, the only way is to add the hours directly to your plan, either as a separate task or baked into the existing tasks.

For macro estimates, I like the first approach the best because it keeps the project plan lean while allowing you to change contingency globally to increase the cost.  This is particularly useful for fixed cost projects.  However, this also means that you may have an unrealistic schedule or need additional resources if the contingency is actually needed because the hours are not in the plan. 
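The first approach boils down to one line of arithmetic – a sketch of the custom column’s calculation (the 10% rate is only an example):

```typescript
// Cost with contingency: the custom column simply scales base cost.
function costWithContingency(cost: number, contingencyPct: number): number {
  return cost * (1 + contingencyPct);
}
```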

Read More

ASP.NET 5 will be Leaner and More Modular

The new ASP.NET 5 framework coming from Microsoft has some significant architectural changes that make it more modular, cross-platform and leaner than previous versions.  The following are some of the key new features for developing lightweight web applications using the new ASP.NET 5 framework.

Use the Full .NET CLR, Core CLR or Cross Platform CLR

One of the key changes in ASP.NET 5 is the ability to choose from full .NET CLR, a more streamlined Core CLR or a cross-platform CLR.  The Core CLR is only 11 megabytes compared to the full .NET CLR at 200 megabytes.  Instead of requiring the entire framework, you can add the specific libraries you need through NuGet packages. 

The cross-platform CLR will target Windows, OS X and Linux. 

Embed the CLR with Your Application

In the current and older versions of the .NET framework, you deployed the CLR as a global installation.  If you upgraded the framework, it could impact every application running on the server. 

In the new ASP.NET 5 architecture, you can run different versions side by side and you can embed the Core CLR and any dependent packages with your application. 

No More Web.Config!

One of the key advantages of ASP.NET 5 is a replacement for the reliable but inflexible Web.Config.  You can now store configuration information in a variety of formats including JSON and XML.
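A JSON configuration file in the new model might look something like this sketch (the key names are illustrative, not a guaranteed final schema):

```json
{
  "AppSettings": {
    "SiteTitle": "My ASP.NET 5 App"
  },
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=MyAppDb;Trusted_Connection=True;"
    }
  }
}
```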

Host Anywhere!

One of the key dependencies for any ASP.NET application is IIS.  While IIS will still be the default method to host an ASP.NET application, you can now target other hosting environments.  For example, the ASP.NET 5 framework provides its own console app called WebListener that can host ASP.NET applications without the need for IIS.

For more information on ASP.NET 5, see the ASP.NET vNext web site.

Read More

Deploy to Azure from Git Repository

Microsoft has just added the ability to deploy directly to an Azure Website from a Git repository.  All you have to do is place the following into your README.md file:

[![Deploy to Azure](http://azuredeploy.net/deploybutton.png)](https://azuredeploy.net/)

You can also use standard HTML tags to provide your own button like this:

<a href="https://azuredeploy.net/" target="_blank">
    <img src="http://azuredeploy.net/deploybutton.png"/>
</a>

 

The following video describes the process in more detail.

Read More

.NET Platform Goes Open Source and Cross-Platform

A number of announcements came out today from Microsoft on the evolution of the Visual Studio, .NET and ASP.NET platform. 

.NET Core Runtime is Going Open Source and Cross-Platform

Microsoft has been moving more of its code to open source frameworks including ASP.NET, Entity Framework, Web API and the C# and VB compilers.

Microsoft announced that the core .NET runtime and libraries are also now going open source.  This includes the CLR, Just-in-Time Compiler, Garbage Collector and core .NET libraries.

In addition, Microsoft is going to release an official distribution of .NET for Linux and OS X.  This will enable any developer to write .NET applications that run equally well on Windows, Linux or OS X. 

New Visual Studio Community Edition

Microsoft is going to release a new version of Visual Studio for free called the Community Edition. 

It is now available completely free for:

  • Any individual developer working on a commercial or non-commercial project
  • Any developer contributing to an open source project
  • Anyone in an academic research or course setting (e.g. students, teachers, classroom, online course)
  • Any non-enterprise organization with 5 or fewer developers working on a commercial/non-commercial project together

We are making it available for download starting today, and developers can download and start using it immediately.  There is no program you need to join to use it – simply visit www.visualstudio.com, click the download button, and you are good to go.

New Visual Studio 2015 Preview with New ASP.NET Features

A new preview version of Visual Studio 2015 has been released with a bunch of new ASP.NET features.

The new ASP.NET features available include new project templates, faster build times, support for xUnit tests, improved intellisense, etc.

JSON Editor Improvements

In addition to the new preview version of Visual Studio 2015, a new update to Visual Studio 2013 has also been released.  Included in this update are improvements to the JSON editor, including JSON schema validation, improved IntelliSense and duplicate property validation.

New Emulator for Android

Microsoft has also now released a new emulator for Android built into the Visual Studio 2015 Preview environment.

[Image: Visual Studio Emulator for Android]

Read More

Azure Resource Manager Tools Now Available in Preview

Microsoft has been working on a new set of tools for Visual Studio that will allow you to define a complete cloud deployment model for custom applications that you build in Visual Studio.  For example, you can create a custom ASP.NET MVC application that uses SQL Server as its back end database as a project in Visual Studio today, but the new Azure Resource Manager will allow you to also define the cloud infrastructure components that are required for your custom application using Azure Gallery templates. 

The current toolset allows you to create only two new types of “Cloud App” – a plain ASP.NET web site and an ASP.NET web site with SQL Server.

[Image: Cloud App project templates]

Additional templates will be added for other common scenarios.

The cloud app solution essentially creates a set of script files for deploying your environment within Azure.  The scripts are in a combination of JSON templates and PowerShell scripts.  The scripts are fully parameterized and you can create multiple “Resource Groups” to deploy, e.g. different environments for dev, test, prod, etc.
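The JSON side of such a deployment is an Azure Resource Manager template; a heavily trimmed sketch might look like this (the resource type and API version shown are illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "[parameters('siteName')]",
      "location": "[resourceGroup().location]"
    }
  ]
}
```

The accompanying PowerShell script then feeds parameter values into this template for each Resource Group (dev, test, prod, etc.).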

[Image: editing deployment parameters]

Once you have configured your resource group, you can publish your application to Azure through Visual Studio or through PowerShell.

[Images: selecting a deployment target]

Read More

Exploring the Evolving Real Time Web Using SignalR

In the traditional web, a web page is inherently NOT real time.  When you request a web page, the browser sends a request to the web server, which sends back the results.  Once your page is displayed, it’s fundamentally out of date until you hit the refresh button and reload the page.

As front end technologies such as HTML5, JavaScript, jQuery, etc. become more sophisticated, web developers have started enabling a more real time experience that supports sharing, collaborating and interacting in real time, both with the server providing information and with other clients.

A simple example is Twitter – if you leave your twitter page up in your browser, you’ll periodically see these messages in real time as new tweets arrive:

image

Why is the Real Time Experience an important evolution from traditional web metaphors? 

  • Discussion boards become chat rooms
  • Document libraries become co-authoring
  • Reports become interactive dashboards
  • Activity walls become real time activity feeds
  • Notifications after the fact become real time alerts
  • Web “pages” become applications
  • Pictures become real time animations

As we add multiple devices all working together to respond to events, having each one poll the server for updated information becomes a broken metaphor – we need to be able to PUSH messages out to clients instead.  The server becomes a message broker instead of a message generator as clients send each other messages.
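The broker idea can be sketched in a few lines – this is a toy in-memory hub to show the push model, not the SignalR API:

```typescript
// Toy message broker: subscribers register a callback and every
// published message is pushed to all of them -- the server relays
// messages rather than waiting for clients to poll.
type Handler = (message: string) => void;

class Broker {
  private subscribers: Handler[] = [];

  subscribe(handler: Handler): void {
    this.subscribers.push(handler);
  }

  publish(message: string): void {
    for (const handler of this.subscribers) {
      handler(message); // push, not poll
    }
  }
}
```

A real framework adds the hard parts – transport negotiation, connection management and scale-out – but the data flow is the same.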

Microsoft has developed a framework for building real time web applications called SignalR.  Version 2.0 of the framework has been out since the fall of 2013 and there is an excellent tutorial site here.  It provides a framework for continuous remote procedure calls between clients and servers using .NET and JavaScript.

Invoking methods with SignalR

The connection between client and server is persistent (i.e. it stays open), which allows you as a developer to build applications that push messages to a set of connected clients.  The protocols used to push these messages vary depending on the browser, and the framework will automatically upgrade or downgrade the protocol based on what the browser can support.

The other key component to making the real time web work is JavaScript frameworks that can update the screen based on incoming information pushed from the server – frameworks such as jQuery, jQuery UI, etc.

There are lots of JavaScript libraries that could be used as front end rendering layers for presenting data in real time.  Here are a few examples:

  • Heatmap.js: provides rendering of heatmaps based on incoming data.  Imagine rendering a heatmap based on a group of people’s clicks, mouse movements, or other signals in real time.
  • Data-Driven Documents (D3): provides rendering of datasets – imagine these beautiful renderings being updated in real time based on data pushed from the server.
  • Cubism.js: provides rendering of time series in real time.
  • Raphael.js: provides rendering of vector graphics using simple JavaScript.  This could be a very nice animation library for use in real time massively collaborative applications.
  • Paper.js: provides a full vector graphics scripting framework based on the HTML5 canvas.

Here are some good examples of real time web applications that could be enabled using SignalR and other real time web frameworks.

JabbR

  • A real time chat application using SignalR as the framework.

image

Office Web Apps

The new Office Web App supports co-authoring of documents in real time.

Murally

  • Group collaboration using virtual sticky notes, images and activities in real time.

LucidChart

  • Group collaboration on the development of flowcharts in real time.

[Image: example of a LucidChart diagram]

Read More