For SharePoint Development, the Future (and Present) is Clearly JavaScript

We have been working on a custom branded SharePoint intranet and interviewing SharePoint developers as we expand our team.  The traditional SharePoint developer working with C#, ASP.NET, farm solutions, etc. is clearly being replaced by developers who see SharePoint and Office 365 as just another web based JavaScript platform.  When we interview SharePoint developers, we ask about their experience with JavaScript frameworks, CSS styling, MVVM modular design, and working with JSOM, REST, etc.  Right now, we're hiring folks with experience in CSS, HTML, and JavaScript frameworks such as Knockout.js, Angular, and React, because these are the types of customizations we are developing.  We haven't done much C# development lately, except for clients who are still running older versions of SharePoint and want to migrate to SharePoint 2016 while keeping their existing farm solutions, custom workflows, InfoPath forms, etc.

Microsoft has been encouraging this shift from server side to client side development for several years, starting with SharePoint 2010 and accelerating dramatically with Office 365.  Because Office 365 does not allow farm solutions or server side code, the community shifted to 100% client side development using JavaScript, HTML, CSS, etc.

In SharePoint 2016, you can still use C# or ASP.NET to create farm solutions, but this is primarily for backwards compatibility.  To support both Office 365 and SharePoint 2016, it is recommended to use client side customizations only.


In the upcoming SharePoint Framework, Microsoft is pushing even further into the JavaScript world.  Gone are web parts, app parts, and add-ins as IFrame component containers; they are replaced by inline JavaScript extensions that are dynamically included in the page.  These JavaScript extensions can be deployed to Office 365 or loaded from an external CDN.  The SharePoint Framework leverages 100% JavaScript code and tools such as TypeScript, Node.js, React, JSON, and Gulp for building customizations.  Unlike the current Add-in model, the code is included directly in the page rather than in the IFrame based containers that older versions of SharePoint used.

[Image: The SharePoint Framework – an open and connected platform]

If you are one of those SharePoint/C# developers who still thinks JavaScript is an afterthought language, think again – it should now be considered a first class language.  Microsoft is pushing it hard, and SharePoint development is already primarily client side development and will continue to be in the future.


Paging through SharePoint 2013 / Office 365 Lists with JavaScript

One of the most basic requirements for customizations in SharePoint 2013 is displaying lists of items.  For example, you might want a list of news items where you control how the list is rendered in the user interface.  There are several ways to do this, including search based display templates, CSOM, JSOM, REST, etc.  We have been using all of these approaches in our custom intranets.

One basic requirement we had was to implement a paging system so that end users could click previous, next, and seek randomly to any page.


Here is how to implement such a mechanism using JavaScript. 

Two Basic Approaches: Fetch Items from a List or Fetch Items from Search

Using the SharePoint JavaScript APIs, there are two basic approaches to obtaining a list of items: 1) query the list where the items are stored directly, or 2) query the search index to find the items.

Here is an example of fetching items directly from a news list using the JavaScript API.  It executes a CAML query against a specific news list with a row limit of 2 items per page.

var rowsPerPage = 2;
var category = "";

// field names used when rendering the full item
var pathField = "Path";
var titleField = "Title";
var bodyField = "BodyOWSMTXT";
var publishedDateField = "PublishedDateOWSDATE";

var loadNews = function () {
    $(document).ready(function () {
        SP.SOD.executeFunc('sp.js', 'SP.ClientContext', function () {

            var ctx = SP.ClientContext.get_current();
            var oList = ctx.get_web().get_lists().getByTitle('Pages');

            var viewFields = "<ViewFields><FieldRef Name='Title' /></ViewFields>";
            var orderBy = "<OrderBy><FieldRef Name='Created' /></OrderBy>";
            var where = "";

            category = manageQueryStringParameter("category");
            if (category != "") {
                where = "<Query><Where><Eq><FieldRef Name='News_x0020_Category' /><Value Type='TaxonomyFieldTypeMulti'>" + category + "</Value></Eq></Where></Query>";
            }
            var rowLimit = '<RowLimit Paged="TRUE">' + rowsPerPage + '</RowLimit>';

            var viewXML = "<View>" + where + viewFields + orderBy + rowLimit + "</View>";

            var camlQuery = new SP.CamlQuery();
            camlQuery.set_viewXml(viewXML);

            var collListItem = oList.getItems(camlQuery);
            ctx.load(collListItem);
            ctx.executeQueryAsync(onQuerySuccess, onQueryFail);

            function onQuerySuccess() {
                var listItemEnumerator = collListItem.getEnumerator();
                var firstPageID = null;
                var lastPageID = null;
                var listHtml = "<ul>";

                while (listItemEnumerator.moveNext()) {
                    var oListItem = listItemEnumerator.get_current();

                    // remember the first item's ID for the Prev link
                    if (firstPageID == null)
                        firstPageID = oListItem.get_id();

                    listHtml += "<li>ID: " + oListItem.get_id() + " Title: " + oListItem.get_item('Title') + "</li>";

                    // remember the last item's ID for the Next link
                    lastPageID = oListItem.get_id();
                }

                // append the finished list in one call so the <li> elements nest inside the <ul>
                $('#list').append(listHtml + "</ul>");
            }

            function onQueryFail(sender, args) {
                alert('Query failed. Error: ' + args.get_message());
            }
        });
    });
};
loadNews();

// pull a named parameter from the query string
function manageQueryStringParameter(paramToRetrieve) {
    var queryValue = "";
    if (document.URL.indexOf("?", 0) > 0) {
        var params = document.URL.split("?")[1].split("&");
        for (var i = 0; i < params.length; i++) {
            var singleParam = params[i].split("=");
            if (singleParam[0] == paramToRetrieve) {
                queryValue = singleParam[1];
            }
        }
    }
    return queryValue;
}

Here is an example of the same query, but instead of querying the list directly, we query the search index.

var rowsPerPage = 2;
var contentTypeID = "0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D008B00E60AACCFA944AC4F0B4704E594A4";

// managed property names used in the query
var titleField = "Title";
var publishedDateField = "ArticleStartDate";
var categoryField = "NewsCategoryChoice";
var category = "";
var categoryURLParameter = "category";

var loadNews = function () {
    $(document).ready(function () {
        SP.SOD.executeFunc('sp.js', 'SP.ClientContext', function () {
            SP.SOD.executeFunc("SP.Search.js", "Microsoft.SharePoint.Client.Search.Query.KeywordQuery", function () {

                var ctx = SP.ClientContext.get_current();
                var keywordQuery = new Microsoft.SharePoint.Client.Search.Query.KeywordQuery(ctx);

                keywordQuery.set_queryText("*");
                keywordQuery.set_trimDuplicates(true);
                keywordQuery.set_rowLimit(rowsPerPage);

                // restrict results to our news content type
                var querytemplate = "ContentTypeId:'" + contentTypeID + "*' ";

                category = manageQueryStringParameter(categoryURLParameter);
                if (category != "") {
                    querytemplate += categoryField + ':"' + category + '" ';
                }

                keywordQuery.set_queryTemplate(querytemplate);

                // ask for the managed properties we want returned
                var properties = keywordQuery.get_selectProperties();
                properties.add(publishedDateField);
                properties.add(titleField);
                properties.add(categoryField);

                var searchExecutor = new Microsoft.SharePoint.Client.Search.Query.SearchExecutor(ctx);
                var results = searchExecutor.executeQuery(keywordQuery);
                ctx.executeQueryAsync(onQuerySuccess, onQueryFail);

                function onQuerySuccess() {
                    var listHtml = "<ul>";
                    var rows = results.m_value.ResultTables[0].ResultRows;
                    for (var i = 0; i < rows.length; i++) {
                        listHtml += "<li>Title: " + rows[i][titleField] + "</li>";
                    }
                    // append the finished list in one call so the <li> elements nest inside the <ul>
                    $('#list').append(listHtml + "</ul>");
                }

                function onQueryFail(sender, args) {
                    alert('Query failed. Error: ' + args.get_message());
                }
            });
        });
    });
};


loadNews();


// manageQueryStringParameter is the same query string helper shown in the list example above

As the differences between the two functions show, the basic approach is similar, but in this case we execute the query using the SharePoint search APIs instead of the list APIs.

Implementing Previous and Next using SharePoint List Query

It is possible to implement previous and next paging using our list query approach.  It requires the following code to be added to the CAML query creation above:

var pagingInfo = 'Paged=TRUE&p_ID=0';
var page = manageQueryStringParameter("p_ID");
var pagedPrev = manageQueryStringParameter("PagedPrev");
if (page != "") {
    pagingInfo = 'Paged=TRUE&p_ID=' + page;
}
if (pagedPrev == "TRUE") {
    pagingInfo = 'Paged=TRUE&p_ID=' + page + '&PagedPrev=TRUE';
}

// tell SharePoint which item ID to page forwards or backwards from
var position = new SP.ListItemCollectionPosition();
position.set_pagingInfo(pagingInfo);

camlQuery.set_listItemCollectionPosition(position);

This code retrieves an ID value – the last displayed item's ID in the case of Next, or the first displayed item's ID in the case of Prev – to tell SharePoint where to count forwards or backwards from.  In addition, you have to set the pagingInfo to include a PagedPrev=TRUE attribute when your user has clicked the previous button.

The ID value is provided as you iterate through your query's results.  The basic code tracks the first item's ID and the last item's ID and appends them to the p_ID parameter passed in the query string:

while (listItemEnumerator.moveNext()) {
    var oListItem = listItemEnumerator.get_current();

    // remember the first item's ID for the Prev link
    if (firstPageID == null)
        firstPageID = oListItem.get_id();

    listHtml += "<li>ID: " + oListItem.get_id() + " Title: " + oListItem.get_item('Title') + "</li>";

    // remember the last item's ID for the Next link
    lastPageID = oListItem.get_id();
}

$('#list').append(listHtml + "</ul>");

$('#pages').append("<a href='./news-list?category=" + category + "&p_ID=" + lastPageID + "'>Next</a>");
$('#pages').append("<a href='./news-list?category=" + category + "&p_ID=" + firstPageID + "&PagedPrev=TRUE'>Prev</a>");

This works well for previous and next.  However, there is no easy way to implement random access to specific pages, e.g. clicking page 3 to go directly to page 3.  Previous and Next work relative to the current position in the list, but the API doesn't provide the total number of items in the list, which we would need to calculate how many pages there are.  In addition, the IDs are not necessarily sequential, which we would need them to be in order to figure out which ID to specify to reach a specific page.

Implementing Pages Using SharePoint Search

The SharePoint Search API allows us to implement pages because of two important features:

  • The SharePoint Search API provides a value for the total number of items in the query even when the Row Limit is specified.  For example, your news list might have 200 items in it but you only want to display five items per page.  Unlike the SharePoint List API, the Search API provides the value of 200 in the TotalRows property of the table of results.
  • The SharePoint Search API provides a method called set_startRow which allows you to specify the starting position of the results provided.  Unlike the List API as described above, the Search API’s positions are linear and sequential.

With these two pieces of information, we can implement Previous and Next and calculate the position of each page.  To fetch a page, we add it to the query string and set the start row like this:

page = manageQueryStringParameter(pageURLParameter);
if (page != "") {
    // startRow is zero-based: page 0 starts at row 0, page 1 at row rowsPerPage, etc.
    keywordQuery.set_startRow(page * rowsPerPage);
}

The start row is the position in the search results based on the current page and the number of results displayed per page.  Since startRow is zero-based, page 1 is page 0, page 2 is page 1, etc.

Calculating previous, next and each page is now straightforward by simply setting the page in the query parameter.  Since we know the total number of rows and the current page, we can enable/disable the previous and next links and create links for each page.

// calculate paging
var totalRows = results.m_value.ResultTables[0].TotalRows;
if (page == "")
    page = 0;
else
    page = parseInt(page);

if (page > 0) {
    var prevPage = page - 1;
    $('#pages').append("<a href='./news-list?category=" + category + "&page=" + prevPage + "'>Prev</a>");
}

var totalPages = Math.ceil(totalRows / rowsPerPage);
for (var i = 0; i < totalPages; i++) {
    // count from 0 internally but display pages starting at 1
    var pageDisplay = i + 1;
    $('#pages').append("<a href='./news-list?category=" + category + "&page=" + i + "'>" + pageDisplay + "</a> | ");
}

if ((page + 1) * rowsPerPage < totalRows) {
    var nextPage = page + 1;
    $('#pages').append("<a href='./news-list?category=" + category + "&page=" + nextPage + "'>Next</a>");
}

One important note on the TotalRows property – it can be an estimate.  The SharePoint Search API provides estimates when the search results are large (e.g. thousands of items coming back from the search).  The result table has a property called IsTotalRowsExact that you can check to see whether the TotalRows value is exact or an estimate.
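A quick way to check this, following the same result table access pattern used in the examples above:

// sketch: detect when TotalRows is only an estimate
var table = results.m_value.ResultTables[0];
if (!table.IsTotalRowsExact) {
    // page counts computed from TotalRows may be approximate
    console.log('TotalRows is an estimate: ' + table.TotalRows);
}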


Power BI Visualization Goes Open Source, Uses JavaScript, HTML5 and D3.js

Power BI visualizations are going open source!  Microsoft has published, and will maintain, all of the code for the visualization layer of Power BI on GitHub:

We’re enabling developers to easily add custom visuals into Power BI for use in dashboard, reports and content packs. To help you get started, we’ve published the code for all of our visualizations to GitHub. Along with the visualization framework, we’ve provided our test suite and tooling to help the community build high quality custom visuals for Power BI. All of this is available as an open source project on GitHub.

This means that developers will be able to contribute customizations back into the repository, as well as extend their local implementations using custom visualizations.

It’s not quite clear how these extensions or customizations will be enabled within a client implementation – for example, can you build your own “Power BI Apps” that extend the framework?  Can you deploy customizations to your own Power BI tenant?  Could you create visualizations that are distributed through some kind of app store?  These are all options in the Office 365 world that don’t really exist yet in the Power BI world but look like interesting future possibilities.

The other key shift is the use of open source JavaScript frameworks.  All of Power BI's visualizations are apparently built on top of D3.js, an open source visualization library that uses JavaScript to render visualizations within the browser.  There are some really neat D3 visualizations out there that could be repurposed within the Power BI context.


Differences Between URL and Publishing Image when Developing SharePoint Apps with JavaScript

We're building a basic slider App Part for SharePoint using all client side code, constructing the slider with some publicly available JavaScript libraries.


The App Part pulls the data from a list mapped by site columns in the App Part’s properties.  Our list has a title, description, and background property to populate the slider’s content. 

There are a couple different methods for storing the image:

  • Custom site column with a URL type
  • Add an existing column such as Page Image or Rollup Image which are of type Publishing Image (these are added to SharePoint when you turn on the publishing features)

There are some significant differences between the two for the end-user supplying the image:


The Page Image user interface is definitely better – instead of forcing the end-user to supply the URL to the image, it has a link to the default Site Collection Images library and the user can pick the image from the gallery.  The Page Image also supports Image Renditions which automatically convert images to standard sizes.


If you're writing code that dynamically supports both types of site columns, you'll run into challenges because parsing a URL is different from parsing a Publishing Image.

The first step is figuring out which site column type you’re dealing with when you fetch the data.  You can query the properties of the list itself and get the field type of the column. 

// simple value object describing one list column (assumed helper; not shown in the original post)
function schemaField(name, type) {
    this.name = name;
    this.type = type;
}

function schema(listName) {
    this.listName = listName;
    this.fields = [];
    this.addField = function (field) {
        this.fields.push(field);
    };
    this.getField = function (fieldName) {
        var matchingField = null;
        for (var i = 0; i < this.fields.length; i++) {
            if (this.fields[i].name == fieldName) {
                matchingField = this.fields[i];
            }
        }
        return matchingField;
    };
    this.loadSchema = function () {
        hostWebUrl = decodeURIComponent(manageQueryStringParameter('SPHostUrl'));
        appWebUrl = decodeURIComponent(manageQueryStringParameter('SPAppWebUrl'));

        // create a deferred to wait on until the async query below returns
        var deferred = $.Deferred();
        var ctx = new SP.ClientContext(appWebUrl);
        var appCtxSite = new SP.AppContextSite(ctx, hostWebUrl);

        var web = appCtxSite.get_web(); // get the host web
        var list = web.get_lists().getByTitle(listName); // get the list
        var fields = list.get_fields();
        ctx.load(fields);
        // execute the query asynchronously
        ctx.executeQueryAsync(
            Function.createDelegate(this, function () {
                var enumerator = fields.getEnumerator();
                while (enumerator.moveNext()) {
                    var listfield = enumerator.get_current();
                    var fieldType = listfield.get_fieldTypeKind();
                    var fieldName = listfield.get_internalName();
                    this.addField(new schemaField(fieldName, fieldType));
                }
                deferred.resolve("Success");
            }),
            Function.createDelegate(this, function () {
                // addErrorMessage is the app's own error display helper
                addErrorMessage("Operation failed: " + arguments[1].get_message(), 1);
                deferred.reject("Operation failed");
            }));
        return deferred.promise();
    };
}

This piece of code takes a list and builds an in-memory schema describing each of its columns, letting you look up a site column and figure out its type.

SharePoint provides an enumeration called SP.FieldType that you can compare your site column against to figure out its type.  URL, for example, has a field type of 11 and matches SP.FieldType.url.

When you query the field type of a Publishing Image, the field type comes back as 0 – an "invalid" field type.  Publishing Image isn't recognized as one of the core field types because it's part of the publishing feature infrastructure.

When you try to pull the value of the field in JavaScript (you’ll see similar behavior in the C# or REST APIs), there are some significant differences:

  • When you retrieve a URL in JavaScript, it comes as a specialized URL object that has a get_url() method for fetching the URL.  This behavior is unique to URL fields.
    url = currentListItem.get_item(imageField).get_url();
  • When you retrieve a Page Image in JavaScript, the field is treated like a normal field in that you can fetch the value with the call:
    url = currentListItem.get_item(imageField);
    However, the format of the content isn’t the URL but the entire image tag, e.g. <img src=…

If you treat one type like the other – for example, calling get_url() on a field that isn't a URL field – you'll get an error.  If you're writing code that needs to fetch the URL out of a Publishing Image field, you'll have to parse the image tag to grab just the src attribute.
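Putting the two together, here is a sketch of a helper that handles both field types (the regex parsing of the image markup is illustrative, not exhaustive):

// sketch: fetch an image URL from either a URL field or a Publishing Image field
function getImageUrl(listItem, fieldName, fieldType) {
    if (fieldType === SP.FieldType.url) {
        // URL fields return a specialized object with a get_url() method
        var urlValue = listItem.get_item(fieldName);
        return urlValue ? urlValue.get_url() : null;
    }
    // Publishing Image fields return raw HTML, e.g. <img src="...">
    var html = listItem.get_item(fieldName) || "";
    var match = /src=["']([^"']+)["']/i.exec(html);
    return match ? match[1] : null;
}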


Building Your Own Micro-Service APIs to Wrap Office 365 or SharePoint 2013 for JavaScript Web Developers

Office 365 and SharePoint provide a variety of APIs that can be accessed and called from JavaScript.  Imagine the following scenario:


Microsoft provides a collection of APIs that you can use to fetch data from SharePoint.  These APIs work quite well but are not exactly intuitive to the non-SharePoint developer (e.g. a web developer who lives in JavaScript/AngularJS/Bootstrap/HTML5 all day). 

Imagine that we have a mobile friendly, responsive web application and we want to display the latest news from SharePoint.  Using JavaScript/AngularJS/HTML5 we can build a nice responsive page layout that has a panel that might look something like this:


If I'm writing an AngularJS controller, what I really want is a JSON object that represents the content to be rendered.  The SharePoint search APIs can provide this through either REST or JavaScript.  However, there are some SharePoint specific "idiosyncrasies" (a kind word for it!) that take some learning to master.  These include:

  • Understanding all the built in search fields, content types, etc. and their non-intuitive identifiers (for example, did you know that “News” has a content type ID of “ǂǂ416e6e6f756e63656d656e74”?)

  • With the search APIs, you have to ask for fields to be returned such as the body text of the news item. 
  • With search, you can get back multiple result sets – for example, SharePoint will provide one result set of found pages and another of relevant results.

  • Searching, filtering, and sorting all have APIs that need to be learned in order to retrieve the results required.

This is just scratching the surface – you could spend months figuring out how the SharePoint search APIs work, and that's just for fetching content.  Adding content has its own learning curve as well.

If you jam all this logic into your JavaScript, you're putting a lot of complexity into your web developers' hands that could be insulated away.

A Proposed Approach: Use a Custom Web API to build a Micro-Service

What if, instead of using JavaScript to interact with the APIs directly, we built a custom web API to act as an abstracted micro-service?  In this scenario, what our poor web developer wants is something as simple as this:

var responsePromise = $http.get(vm.NewsLatestURL, config, {});
responsePromise.success(function (dataFromServer, status, headers, config) {
    vm.newsItems = dataFromServer;
});
responsePromise.error(function (data, status, headers, config) {
    alert("Loading NewsLatest failed!");
});

I ask for a JSON object, and I get one returned.  Simple.  Once I had this micro-service written, I could hook up the JavaScript in about 30 seconds.  More importantly, I didn’t need to know anything about SharePoint at all.

How do we build such an API?  We can use ASP.NET Web API as our framework.  In this scenario, I built a custom SharePointController that calls a business component that does all the idiosyncratic SharePoint API work and returns a simple pre-processed JSON object.


The code for the ASP.NET WEB API controller is very simple because all the processing logic is in a separate service class.

public class SharePointController : ApiController
{
    [Route("NewsLatest")]
    public IEnumerable<SharePointNewsModel> GetLatestNews()
    {
        SharePointNewsService service = new SharePointNewsService(URL, UserName, Password);
        return service.LatestNews();
    }
}
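The SharePointNewsModel the controller returns can be a plain value object – a sketch (the property names here are illustrative assumptions, not the actual implementation):

// sketch: a simple pre-processed model for the web developer to consume
public class SharePointNewsModel
{
    public string Title { get; set; }
    public string Summary { get; set; }
    public string Url { get; set; }
    public DateTime PublishedDate { get; set; }
}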

Everything is mediated through a custom developed API – this makes it ridiculously easy for web developers to focus on HTML, JavaScript and CSS instead of needing to trace through the SharePoint APIs.

Additional Advantages to this Approach

There are some additional advantages to this approach vs. going straight at the APIs:

  • Microsoft has a tendency to change their APIs, add new ones, or move them around.  Using a custom API that you control allows you to insulate and centralize those dependencies.

  • Using a custom API opens up the possibility of remapping security identities or using a service account approach instead of always assuming the user is a SharePoint user who has logged into Office 365.
  • If you start with SharePoint on-premises and then move to Office 365, this approach will insulate your code from the migration.
  • You can add all sorts of business logic that is useful and interesting before sending the results back to the page.  For example, you could filter the results, limit the results, reformat the HTML, add additional lookup data, etc.  which all sits within your centralized API.
  • If you decide to dump SharePoint and replace it with something else, you’ve isolated it from your JavaScript.
  • You could introduce performance enhancing caching approaches, such as storing results in memory or pre-rendering, to make fetching the data much faster.  Even logging into Office 365 takes a couple of seconds, so having some method to cache content is going to be important for a high volume or high performance web application.

These benefits come in addition to making your web developers' lives simpler and allowing them to treat SharePoint as a content repository instead of having to understand the guts of the entire platform.


Integrating WordPress and Azure Search with new Microsoft Azure Search SDK

As previously posted, Azure Search has been promoted to General Availability.  In February, I posted a detailed article on how to integrate WordPress with Azure Search using the Azure Search preview APIs.  This article describes the same approach, but with updated code using the new Azure Search SDK.  The latest code is committed to GitHub here.

In addition, I have now fully deployed the code to this blog so you can try it out…let me know what you think!

Getting Started

In order to integrate WordPress and Azure Search, the basic flow for data is:

[Diagram: WordPress posts are pulled by an Azure WebJob and pushed into an Azure Search index]

In order to pull posts from WordPress, install the JSON REST API plugin found here (or in the plugin gallery). 

To create a custom WebJob, use the latest Azure SDK and Visual Studio 2013.  Once you have installed the Azure SDK, you’ll see a project template for Azure WebJobs. 

To use the Azure Search service, you need to create a search service in Azure.  See this article for directions on how to do this through the Azure Portal.

To access the Azure Search API, you can go through the REST API directly, or you can use the Microsoft Azure Search SDK.  To install the client into your WebJob, open the NuGet Package Manager Console and enter "Install-Package Microsoft.Azure.Search -Pre".  This also installs the Newtonsoft JSON.NET library, which we can also use for interacting with the WordPress REST API.

WebJobs Architecture

When you create a WebJob in Visual Studio, it provides the ability to deploy straight to your Azure Web Site.  This works really well.  Alternatively, you can upload it manually as an .exe through the portal.  You can also run your WebJob locally in debug mode which in this case works perfectly because we have no real dependencies on Azure Web Sites to run the job.

The basic components of the architecture are:

  • Program: the main web job console app.

  • WordPressJSONLoader: service class responsible for pulling posts from WordPress
  • WordPressPosts and WordPressPost: value objects representing the loaded collection of WordPress posts and each individual post.
  • AzureSearchIndexer: service class responsible for pushing posts into Azure Search.

Runtime configuration is done through App.config and/or the Azure Web Sites configuration.  As part of the Azure SDK, you can use the CloudConfigurationManager to get environment settings; it is smart enough to give values in the Azure Web Sites configuration priority over settings found locally in App.config.  If you are running locally, it falls back automatically to App.config for configuration values.

// load configuration attributes
webSiteURL = CloudConfigurationManager.GetSetting("WebSiteURL");
searchServiceName = CloudConfigurationManager.GetSetting("ServiceName");
searchServiceKey = CloudConfigurationManager.GetSetting("ServiceKey");
indexName = CloudConfigurationManager.GetSetting("IndexName");

Retrieving Posts from WordPress

With the JSON REST API plugin installed, retrieving posts from WordPress is easy – just call the URL www.yourwebsite.com/?json=get_posts.  By default this retrieves the last 10 posts, but you can use filtering parameters and paging to change how many posts you retrieve.

Using the JSON.NET library, you can deserialize the JSON into a JObject, which provides an easy way to pull entities such as posts, comments, etc. out of the returned JSON.

When the JSON REST API is called, it returns 10 posts along with the total number of "pages".  Based on this number of pages, we can pull all the posts, 10 at a time.

In this method, we simply pull out the posts and deserialize them into a collection of WordPressPost objects.

One of the key changes in the Microsoft Azure Search SDK compared to the previously available RedDog.Search client is that both async and synchronous methods are provided, which makes the code a little simpler in a console application.

Note: one bug I found in the JSON API is that the excerpt field contains the JetPack plugin's share button HTML if you have that plugin activated.  In my code, I strip this out and take only the first paragraph, which represents the excerpt text.

/// <summary>
/// Loads WordPress posts from any WordPress blog.
/// </summary>
/// <param name="URL">WordPress blog URL</param>
public static WordPressPosts LoadAllPosts(string URL)
{
    WordPressPosts wordPressPosts = new WordPressPosts();
    string query = "?json=get_posts";
    WebClient client = new WebClient();
    Stream stream = client.OpenRead(URL + query);
    StreamReader reader = new StreamReader(stream);
    var results = JObject.Parse(reader.ReadLine());

    // first page of posts
    var JsonPosts = results["posts"];
    if (JsonPosts != null)
    {
        foreach (var JsonPost in JsonPosts)
        {
            wordPressPosts.Posts.Add(loadPostFromJToken(JsonPost));
        }
    }

    // the response reports the total number of pages; fetch the rest ten posts at a time
    if (results["pages"] != null)
    {
        int pages = (int)results["pages"];
        for (int i = 2; i <= pages; i++)
        {
            query = "?json=get_posts&page=" + i;
            stream = client.OpenRead(URL + query);
            reader = new StreamReader(stream);
            results = JObject.Parse(reader.ReadLine());
            JsonPosts = results["posts"];
            foreach (var JsonPost in JsonPosts)
            {
                wordPressPosts.Posts.Add(loadPostFromJToken(JsonPost));
            }
        }
    }
    return wordPressPosts;
}
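The loadPostFromJToken helper referenced above isn't shown in the listing; here is a sketch of what it does, assuming the JSON REST API plugin's default field names:

// sketch: map one post's JSON to the WordPressPost value object
private static WordPressPost loadPostFromJToken(JToken jsonPost)
{
    return new WordPressPost
    {
        Id = jsonPost["id"].ToString(),   // Azure Search keys must be strings
        Status = (string)jsonPost["status"],
        Title = (string)jsonPost["title"],
        Content = (string)jsonPost["content"],
        Excerpt = (string)jsonPost["excerpt"],
        Slug = (string)jsonPost["slug"],
        CreateDate = (DateTime)jsonPost["date"],
        ModifiedDate = (DateTime)jsonPost["modified"]
    };
}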

Creating an Index

Creating an index is reasonably easy but I found a few gotchas along the way:

  • The key field MUST be a string (I originally tried to use an integer field).

  • Searchable fields MUST be of type string (I originally tried to make a date field searchable). 

If you violate these rules, the index creation process fails and an error is returned.

The new create index method looks roughly like this – a sketch using the new SDK's Index and Field types (the field names are assumptions that mirror the WordPressPost class shown below, with the gotchas above reflected in the field definitions):

// sketch: create the index with the Microsoft.Azure.Search SDK
public void CreateIndex()
{
    SearchServiceClient serviceClient = new SearchServiceClient(
        searchServiceName, new SearchCredentials(searchServiceKey));

    var definition = new Index()
    {
        Name = indexName,
        Fields = new[]
        {
            new Field("Id", DataType.String) { IsKey = true },            // the key MUST be a string
            new Field("Title", DataType.String) { IsSearchable = true },  // searchable fields MUST be strings
            new Field("Content", DataType.String) { IsSearchable = true },
            new Field("Excerpt", DataType.String) { IsSearchable = true },
            new Field("CreateDate", DataType.DateTimeOffset) { IsSortable = true, IsFilterable = true },
            new Field("Slug", DataType.String)
        }
    };

    // if the definition violates the rules above, this call returns an error
    serviceClient.Indexes.Create(definition);
}

Adding Posts to an Index

Now that we have our index, we can push posts into it.  One of the new features of the Azure Search SDK is that you can pass rows in as objects, and it will use reflection to convert the properties into field values.

We have a class called WordPressPost that represents each post with its appropriate fields.

/// <summary>
/// Value object representing a single WordPress post.
/// </summary>
public class WordPressPost
{
    public string Id { get; set; }
    public string Status { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public string Excerpt { get; set; }
    public DateTime CreateDate { get; set; }
    public DateTime ModifiedDate { get; set; }
    public string CreateDateAsString { get; set; }
    public string ModifiedDateAsString { get; set; }
    public string Author { get; set; }
    public string Categories { get; set; }
    public string Slug { get; set; }
    public string Tags { get; set; }
}

To add the posts, we pass the objects in as an array and create an IndexBatch like this:

try
{
    DocumentIndexResponse response = indexClient.Documents.Index(
        IndexBatch.Create(BatchOfWordPressPosts.Select(doc => IndexAction.Create(doc))));
}
catch (IndexBatchException e)
{
    // some documents can fail while others succeed; log the keys that failed
    Console.WriteLine("Failed to index some of the documents: {0}",
        String.Join(", ", e.IndexResponse.Results.Where(r => !r.Succeeded).Select(r => r.Key)));
}

The previous RedDog.Search library had a maximum of 1,000 items per batch.  I haven't found a documented maximum number of items per batch for the new SDK yet, but I left in the code that limits batches to 100 items.
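A sketch of that batching loop (assuming posts holds the full list of WordPressPost objects, indexClient is already configured, and System.Linq is imported):

// sketch: push documents to the index in batches of 100
const int batchSize = 100;
for (int i = 0; i < posts.Count; i += batchSize)
{
    var batch = posts.Skip(i).Take(batchSize);
    indexClient.Documents.Index(
        IndexBatch.Create(batch.Select(doc => IndexAction.Create(doc))));
}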

Checking our Index in the Portal

We can verify that we have content in the index by going to the portal and checking out our index:


As shown, we have a newly created index with 285 items.

Building a Search Portal

Now that we have some content, let's build a simple search interface using just HTML and JavaScript.  We'll use the REST APIs to fetch data from the index and AngularJS as the framework for displaying the search results.

Publishing to Azure Web Sites into a Virtual Application

Our WordPress site has been installed into the root of the Azure Web Site.  When we publish our search pages and JavaScript code, we don’t want them clobbering our existing WordPress site or getting deleted or mangled by mistake if there is an upgrade to WordPress.

Azure Web Sites supports the addition of virtual applications that run in their own sub-directory.  To create one, go into the Configure tab of the Azure Web Site and go to the bottom of the page.  You will see a section called “virtual applications and directories”.  In here, we can create a completely separate application that runs in its own directory, with its own web.config and publishing profile.


In Visual Studio, you can configure the publishing profile to publish to this new virtual application.


Specify the subdirectory in both the Site Name and Destination URL fields.

Fetching the Search Results With AngularJS

Building a search form using AngularJS is ideal for pulling in data from Azure Search because Azure Search returns JSON data by default.  We can simply assign the results to an AngularJS variable and then use the framework to display them dynamically.

We start with a basic search form styled using Bootstrap.  I use the Sparkling theme for my WordPress blog, and this theme already uses Bootstrap as its core CSS framework, so adding some custom HTML that uses the same Bootstrap CSS elements works really well.


The nice thing with using Bootstrap is that if you switch your WordPress theme, as long as it uses Bootstrap (most of them do these days) your search form and results will take on the style of your blog.

If you perform a search with no keywords specified, Azure Search will return ALL documents.  This isn't something we want, so we have made keyword a required field and check that it isn't blank before submitting.

The submit method for fetching the Azure Search results is the key for pulling in the results from Azure Search.  In building this method, I found a few gotchas to share:

  • Make sure you include the api-version in the request or Azure Search will return an error.

  • The default ordering is by relevance.  In our case, we have also added an option to sort by create date (e.g. $orderby=CreateDate desc).
  • You have to include the api-key in the HTTP header when you send the request.  You can create a query key in the Azure portal instead of exposing the admin key publicly.
  • The search results are contained in the returned JSON object's "value" property.

vm.submit = function (item, event) {
    var URLstring;
    if (vm.orderby == "Relevance")
        URLstring = vm.URL + "?search=" + vm.keywords + "&api-version=" + vm.APIVersion;
    else
        URLstring = vm.URL + "?search=" + vm.keywords + "&$orderby=CreateDate desc" + "&api-version=" + vm.APIVersion;

    if (!isEmpty(vm.keywords)) {
        var responsePromise = $http.get(URLstring, config, {});
        responsePromise.success(function (dataFromServer, status, headers, config) {
            // the "value" property of the returned JSON contains the search results
            vm.results = dataFromServer.value;
            vm.showSearchResults = true;
        });
        responsePromise.error(function (data, status, headers, config) {
            alert("Submitting form failed!");
        });
    } else {
        vm.showSearchResults = false;
        vm.results = [];
    }
}
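The config object referenced above is where the api-key header goes; a minimal sketch (the key value is a placeholder):

// sketch: pass the Azure Search query key in the api-key HTTP header
var config = {
    headers: { 'api-key': '<your query key>' }
};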

Displaying the Results

Once we have a JSON object with the search results, displaying them is pretty easy – just use the AngularJS ng-repeat attribute to iterate through the results returned.

One key note is the use of a filter to treat the returned HTML as HTML – by default, AngularJS will HTML encode it instead of letting it through raw.  To change this behaviour, you can add this filter:

angular.module('app').filter('unsafe', function ($sce) {
    return function (val) {
        return $sce.trustAsHtml(val);
    };
});

Using this filter, you can declare the variable as unsafe and it will be allowed through as raw HTML.
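For example, in the results template (the markup here is illustrative):

<!-- render the post excerpt as raw HTML via the unsafe filter -->
<div ng-repeat="result in vm.results">
    <h4>{{result.Title}}</h4>
    <div ng-bind-html="result.Excerpt | unsafe"></div>
</div>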

Adding a link to the original post is easy – just create an anchor link with the ID of the post.  (You could also use the slug field that is indexed for friendlier URLs, if permalinks are turned on.)
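For example, assuming default WordPress permalinks where ?p=<ID> resolves to the post (vm.blogURL is an assumed controller property):

<a ng-href="{{vm.blogURL}}/?p={{result.Id}}">{{result.Title}}</a>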

Integrating into WordPress

With the solution published to Azure Web Sites in a Search subdirectory, we can embed the published JavaScript files into our WordPress site.  While a proper WordPress plugin would be ideal, we just added the search.html code to a WordPress page using the out of the box content editor.

Note: when adding HTML to a page using the text editor in WordPress, if you leave any line feeds in, WordPress converts them into <p> tags.  This isn't what we want with all our JavaScript and AngularJS code.  If you delete all the line feeds and keep all the HTML together, you can avoid this problem.


Adding a Search Form on the Home Page

In addition to the search results page, we can add a widget to include a basic search form on the home page.  You can embed the HTML for the form using the widget editor and adding a text widget.


Reading Query Data from JavaScript

To read the form values submitted from the home page to the search results page, we need to read the values included in the query string.

I found a basic JavaScript function that parses the query string and looks for incoming search parameters.  I then load these into the AngularJS controller and execute a search on the initial page load.

/*
 Function: getUrlParameters
 Description: Get the value of URL parameters either from the current URL or a static URL
 Author: Tirumal
 URL: www.code-tricks.com
*/
function getUrlParameters(parameter, staticURL, decode) {
    var path = (staticURL.length) ? staticURL : window.location.search;
    if (path.indexOf("?") >= 0) {
        var parArr = path.split("?")[1].split("&");
        for (var i = 0; i < parArr.length; i++) {
            var parr = parArr[i].split("=");
            if (parr[0] == parameter) {
                // return the matching parameter, optionally URL-decoded
                return (decode) ? decodeURIComponent(parr[1]) : parr[1];
            }
        }
    }
    return false;
}

The Final Result – Search Results!

Here is the final result – a fully functioning search page that pulls WordPress posts from Azure Search and searches against keywords with the results sorted by either relevance or create date.



Office 365 and OneDrive APIs Now have CORS Support: Key for JavaScript Apps

JavaScript, by default, implements a "Same Origin Policy", which means that JavaScript can only make calls back to its originating domain.  For application developers using JavaScript to call external services through REST APIs, this is a big limitation, as these services can live anywhere on the Internet across multiple domains.

Cross-Origin Resource Sharing (CORS) is a standard mechanism that allows JavaScript applications to make calls across domains.  The specification defines a set of HTTP headers that let the browser and the server negotiate authorization as requests cross domains.
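For example, a cross-domain request and its response negotiate with headers roughly like this (URLs illustrative):

GET /v1.0/drive/root HTTP/1.1
Host: api.onedrive.com
Origin: https://contoso.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://contoso.com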

Microsoft has just announced that CORS support is now available for Office 365 APIs, specifically the Sites APIs and the OneDrive APIs.  Mail/Calendar/Contacts APIs will support CORS soon.

The support for CORS is part of the ongoing evolution of the Office 365 APIs toward supporting JavaScript and frameworks such as AngularJS as first class programming frameworks, removing the need for server side code when working with Office 365 from your JavaScript.


Angular 2 Being Developed On TypeScript

TypeScript is an open source language that sits on top of JavaScript, enabling developers to build better JavaScript through class based object oriented programming and static typing.  It is developed by Microsoft but is open source and available on GitHub.  Programs written in TypeScript compile to plain JavaScript.

TypeScript is available as an extension for Visual Studio or as a Node.js package.
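For example, a few illustrative lines of TypeScript:

// a typed class that compiles down to plain JavaScript
class Greeter {
    constructor(private name: string) { }
    greet(): string {
        return "Hello, " + this.name;
    }
}
var greeter = new Greeter("world");
console.log(greeter.greet());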

Today, it was announced that Angular 2 will be built on top of TypeScript.  Microsoft has been working with Google to merge their efforts.

Google had been developing an independent scripting language called AtScript on top of TypeScript.  AtScript will now be abandoned and merged into the core TypeScript language as the two companies work on Angular 2 together!
