Thursday, June 14, 2018

Creating a PowerApp with a SharePoint list as its datasource

PowerApps is a Microsoft service that lets you quickly create apps for displaying and manipulating data. In this blog post, we will go through how to create a PowerApp for saving, viewing, and editing information that is stored in a SharePoint list and is accessible to everyone in an organisation.

First, we will look at creating a SharePoint list from an Excel sheet.

1. Open an existing Excel sheet that you want to use for your PowerApp. Select the data in the Excel sheet and format it as a table.

2. Log in to SharePoint. From the New tab, select App.

3. Search for “Spreadsheet” and click Import Spreadsheet from the search result.

4. Browse to the Excel sheet you used in Step 1 and click Import.

NOTE: You might face one of these issues when trying to import the Excel sheet to SharePoint as a list.
- Error: “Specified file is not a valid spreadsheet or contains no data to import.”
Solution that worked for me: add the SharePoint URL to your browser's trusted sites list.
- Error: “This feature requires a browser that supports ActiveX controls.”
Solution that worked for me: as I was using Google Chrome on a Windows machine, I installed the IE Tab extension for Chrome, then opened SharePoint from an IE tab. The IE Tab extension is not available for macOS.

You have successfully imported the Excel sheet table as a list in SharePoint.

5. We will now use this list to create a PowerApp. To do this, from your list, click the PowerApps drop-down on the menu bar and select Create an app.

A new PowerApp with the SharePoint list as its data source will be generated automatically. This PowerApp displays a list of items. You can view an item's details by clicking (>) next to it, and you can also edit those details from within the PowerApp.

Your PowerApp is now ready to be published so that it can be useful to everyone in the organisation.

Sabina Pokhrel

Monday, June 4, 2018

Cross-Referencing using Adobe InDesign

Documentation is sometimes the most critical part of a project. Whether it is internal or product documentation, finding a way to automate it can save a lot of time and reduce the number of mistakes that creep in when documents are changed manually.

At nsquared, all our software comes with an extensive user guide for customers to follow, so it’s important for us to automate as much as possible.

The following steps will show you how to create Text Anchors, and how to create Cross-References to these anchors to automate referencing within your document. For example, if your document contains references such as “…refer to Chapter 5 on page 54”, the chapter name and page number will be generated automatically. If the chapter moves off page 54, the reference will update to match.

Note: This guide assumes that you have an intermediate level of experience with InDesign.

Creating Text Anchors

1. With your document open, identify the text you would like to define as Text Anchors. Typically, these Text Anchors are chapters, headings, and sub headings. For example, Chapter 5.
2. Once you have identified the Text Anchors, highlight the text.
3. Open the Hyperlinks window by going to the Window menu, then Interactive > Hyperlinks.
4. With the text highlighted, click the hamburger menu icon in the top right corner of the Hyperlinks window, then click New Hyperlink Destination.
5. A pop up window will appear. From the Type dropdown, select Text Anchor.
6. Give the Text Anchor a Name. It is recommended that you name the Text Anchor the same as the highlighted text; this will make it easier to Cross-Reference later, which we will cover in the next section. For this example, we will call the Text Anchor Chapter 5.
7. Click OK.
8. Repeat steps 4-7 to create all the Text Anchors in your document.

Inserting Cross-References through your document

1. Now it is time to reference the Text Anchors. Click where you want to insert a reference in your document; taking the example from above, “…refer to Chapter 5 on page 54”.
2. Go to the Type menu, Hyperlinks & Cross-References > Insert Cross-Reference.
3. From the Link To dropdown, select Text Anchor.
4. From the Document dropdown, make sure the document you are working on is selected.
5. From the Text Anchor dropdown, select the correct Text Anchor for this reference. For this example, we will find Chapter 5.
6. From the Format dropdown, select the format you wish. These formats can be edited further by clicking the pencil icon.
7. Click OK.
8. Repeat steps 1-7 to insert all Cross-References in your document.

You have now automated all references throughout your document. If you rename a Text Anchor, or it moves to a different page, all Cross-References will update automatically.

No more incorrect referencing!

Jessica Ayad

Wednesday, May 30, 2018

Test Driven Development (TDD)

What is TDD? How is it different from Unit Tests? How many tests should one write when using a TDD approach? These and many more questions come to mind when we consider taking a TDD approach.

TDD stands for Test Driven Development, and it is different from writing Unit Tests: Unit Tests refer to what you are testing, whereas TDD describes when you are testing. To simplify, with Unit Tests we test and verify the smallest possible unit of behaviour, whereas with TDD the tests drive the development. We can say that Unit Tests are one part of a TDD approach, in which we write tests before writing the code; those tests can include Unit tests, functional tests, behavioural tests, acceptance tests, and so on.

The idea looks simple in theory, yet it represents a fundamental change in how we approach software development.

Red-Green-Refactor cycle:
The key to TDD is the Red-Green-Refactor cycle. Write a test that fails, fix the code, and run the tests again, repeating until they pass. The cycle works as follows:


RED - Write a failing test which captures the requirements.
GREEN - Implement the functionality by writing just enough code to pass the test.
REFACTOR - Refine/improve the code without adding any new functionality.

And then repeat the whole cycle.
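
As a minimal sketch of one turn around the cycle, here is what a first test might look like using PowerShell and the Pester framework (the Add-Numbers function and the file name are illustrative assumptions, not from any particular project):

    # Add-Numbers.Tests.ps1 -- a hypothetical first test, written before the code exists.
    Describe 'Add-Numbers' {
        BeforeAll {
            # RED: run Invoke-Pester before this function exists and the tests fail.
            # GREEN: write just enough code to make them pass. In a real project the
            # function would live in its own file and be dot-sourced here.
            function Add-Numbers ($a, $b) { return $a + $b }
        }

        It 'adds two positive numbers' {
            Add-Numbers 2 3 | Should -Be 5
        }

        It 'treats zero as the identity' {
            Add-Numbers 7 0 | Should -Be 7
        }
    }

Running Invoke-Pester against this file gives the red (while Add-Numbers is missing or wrong) and the green (once it passes); the refactor step is then cleaning up the implementation while the tests stay green.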

In my opinion, it is always helpful to see the code tested up front using this painless testing approach. It gives us a sense of confidence before we start end-to-end tests for the project. Beyond that, I believe it leads to better code coverage, fewer defects, and easier maintenance.

As important as it is to start with this approach, it is equally important to keep to it when making modifications. In a real-world situation, applications change, and over time a method may be removed, added, or modified. While modifying the code at a later stage, we should run all the tests written so far to ensure that we did not break any existing functionality. In my experience, this reduces the testing time by more than half.

It is also very important to monitor the code coverage through the development and maintenance cycle of the application. Code coverage tells us if any code is never called by a test, in which case one of two scenarios applies: either the code is missing a corresponding test, or it is dead code and should be removed.

Since writing, running, and fixing tests consumes time, it is very tempting to put the writing of tests on the back burner. The difficult part is keeping the discipline and continuing the practice. When it comes to shipping robust, high-quality products, the benefits of this approach are rewarding!

At nsquared we are working using TDD on our latest products. If you want to find out more about how we work, please get in touch.

Tripti Wani

Monday, May 21, 2018

Documentation for today's programmer

When creating documentation, whether for a project, a lab, or a technical reference, you have likely run into the challenge of needing to move that documentation into different formats: PDF is a popular one, but perhaps also HTML, particularly if your company uses a wiki. At nsquared, we found that this movement of documents can get frustrating, not only because they do not always come across cleanly, but also because once the copies diverge, you have to maintain several different documents. Time to solve this, using tools which are freely available: Markdown, Pandoc, and PowerShell.

The solution is reasonably simple. You can still write your document up in your favourite word processor, however, keep the formatting to a minimum (avoid anything more complex than bold, italics, and hyperlinks; also, you can add images, but do not do it in your word processor). Once you have your file ready, save it out as a .docx, so that we can get underway in earnest. The first part covers converting your document to the Markdown format.

Steps to convert from docx to Markdown:
1. Download and install Visual Studio Code.

  - We will use this to edit your document later, but essentially this will be your go-to program very soon.
2. Download and install the Pandoc installer (download the latest Windows ‘x86_64.msi’ file).
  - Pandoc is a freely available program which will handle the conversion of your documents. It supports a host of outputs, including docx, HTML, Markdown, PDF, LaTeX, and txt, just to name a few.
3. If you are running Windows 10, you will already have PowerShell available to you. (This solution is written for Windows, though it is transferable to Apple Mac via Terminal.) Once you are ready, launch PowerShell (found by typing 'power' into the search of the Windows menu).
4. You now need to navigate PowerShell to the location of your document. Generally, it will start in your user folder (C:\Users\YourAccount). You can use the 'cd' command, plus the path of your document to get there quickly:

  - Type: cd 'C:\Users\YourAccount\Documents'
  - Make sure to replace the path section with the location of your document (the above uses 'Documents' as that location).
5. With the above completed, you will notice that the path that PowerShell is using is what you just typed - this means that it is now using this location as the point from which to execute commands.
6. Now it is time to utilise Pandoc. In PowerShell, type: pandoc 'YourDoc.docx' -f docx -t markdown -s -o 'YourNewDoc.md'

  - Make sure to substitute 'YourDoc' with the name of your current document, and change 'YourNewDoc' to the name you want for your converted file.
  - If you want to know more about the commands available for Pandoc, visit the Pandoc documentation page.
7. With that all in place, press the Enter key on your keyboard to run the command.
8. Your document will be converted to Markdown (the original document is retained, though you will not need it by the end, so do with it what you like).
9. You have successfully converted your document from .docx to Markdown. The next part is to update your document using Markdown.

Steps to update your Markdown document for easy conversion:
1. Open Visual Studio Code. Once open, you will be presented with a (mostly empty) window. This might look familiar in part if you have used any other Visual Studio program; Visual Studio Code is the lightweight member of the family, yet it is remarkably powerful and lets you easily write in many programming languages.

2. Click File > Open File, and then browse to your new Markdown file to open it in VS Code.
3. With your file now open, you will notice that it looks very plain. This is the power of Markdown: it utilises only the most basic formatting, which allows it to be easily converted into other formats.
4. A Markdown cheat sheet (easily found online) is an excellent guide to how Markdown works. Have one at the ready for the next few steps.
5. Now you will need to open the preview pane (which shows you what your document will look like with the Markdown applied). With your document open, navigate to the top right and click the split-window icon with the magnifying glass in front of it.

6. You will be presented with a panel to the right of your Markdown document showing you what the output will be. You will notice that all the Markdown tags (#, *, ---, ```) are gone, and just plain text appears, with light formatting.

7. Now, using the cheat sheet as a guide, update your Markdown document so that it presents how you would like it.
8. With your Markdown complete, close Visual Studio Code; you are ready for further conversion!

Steps to convert your Markdown to HTML:
1. Open PowerShell once more, and navigate to your Markdown document:

  - Remember to use the ‘cd’ command and the path to your document: cd 'C:\Users\YourAccount\Documents'
2. Once PowerShell is in the same location as your Markdown document, use the following command to convert from Markdown to HTML:
  - pandoc 'YourNewDoc.md' -f markdown -t html -s -o 'YourNewWebpage.html'
3. Your document is now in HTML! If you had images and set them up correctly (according to the cheat sheet), they will have come across cleanly, creating a ‘media’ folder along the way for use with your new webpage.

As you will realise, from now on, you simply need to maintain your Markdown document, and then you can convert it; as mentioned previously, this works for PDF and docx too, so you can always produce those formats if you need them.
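
For instance, the same Pandoc pattern produces those formats too (a sketch; PDF output additionally requires a LaTeX engine, such as MiKTeX, to be installed for Pandoc to use):

    pandoc 'YourNewDoc.md' -f markdown -t docx -s -o 'YourDoc.docx'
    pandoc 'YourNewDoc.md' -f markdown -s -o 'YourDoc.pdf'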

This is just the start of your Markdown journey; expect to continue and go further. Using Pandoc and PowerShell, you could put together a PowerShell script to automatically convert your latest Markdown document to HTML, so that you can keep working on your documents without worrying about the export process. This is an excellent workflow and may help you increase efficiency!
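
As a minimal sketch of that idea (assuming pandoc is on your PATH and your Markdown files sit in the current folder; the script name is illustrative):

    # convert-docs.ps1 -- convert every Markdown file in the current folder to HTML.
    Get-ChildItem -Filter *.md | ForEach-Object {
        # Build the output name by swapping the file extension.
        $html = [System.IO.Path]::ChangeExtension($_.FullName, 'html')
        pandoc $_.FullName -f markdown -t html -s -o $html
        Write-Host "Converted $($_.Name) to $(Split-Path $html -Leaf)"
    }

Save it next to your documents and run .\convert-docs.ps1 whenever you want fresh HTML.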

Elliot Moule

Tuesday, May 8, 2018

Bringing your 3D Models into Unity

When you are working across multiple programs, it’s important for any designer to be aware of which file formats are suitable for the program you’re using. There is nothing worse than working long and hard to make something look great in one program, only to have it error when dragged into another!

3D is lots of fun, and it is very straightforward to make simple models to deck out any scene in Unity, so I’m going to walk you through a simple procedure for exporting a 3D model into Unity. My preferred 3D modelling software is Autodesk Maya, but you can use any you wish, as long as it can export the model as an FBX file.

Why .fbx?
An FBX file is a 3D asset exchange format that is compatible with many 3D tools, and in most cases it lets you keep the materials on the object if desired. Compared with an OBJ file, it gives you far more capability.

Getting started:
Jump into Maya and create your model. I have made a simple lamp to use as an example. Note: you want your model to be on the lower side of the poly count. To keep track of the count, select Display > Heads Up Display > Poly Count. The count will appear in the top left corner.

You can apply your materials to the object in your 3D software, OR Unity. For the sake of showing you how to do so in Unity I’m going to leave my model without a material in Maya.

A few things to check before exporting:
- It is a good idea to combine your meshes. You can do this by holding down the mouse button, dragging a box over all the objects, and selecting Mesh > Combine.
- The ‘Up Axis’ should be set to ‘Y’.
- Your model is on the ground plane, with all location and rotation values set to 0 in the ‘Attribute Editor’ panel.

Getting your model export-ready is very straight forward. Ensure the alignment is correct and everything is squared-out and facing forward toward the positive Z-Axis. When you’re ready, go to File > Export All. Select FBX under ‘Files of Type’ to export in the correct file format. Name your file and place it somewhere you can easily access when you open up Unity to import the object.

Once you’re in Unity:
Once you’ve got a new or existing scene set up in Unity, simply drag and drop your FBX file into the Assets panel. Once you can see your model, drag it into your scene. You can then right-click in the Project panel and click Create > Material. Now for the fun part! You can drag and drop a PNG onto the material if you’d like, or select a colour from the Inspector panel on the right. Here you can also customise your material by adjusting the X and Y values in the Tiling section.

So there you have it!
A basic guide on how to get your 3D objects into Unity correctly as well as a little bit of customisation. Happy designing!

Jacqui Leis

Sunday, April 29, 2018

Adding AI to Mixed Reality

Over the last few months I have had the privilege of helping numerous Microsoft Partners get started with building Artificial Intelligence into their Mixed Reality applications.

It might help you to understand what I mean by Artificial Intelligence, as it is a heavily overloaded term used to describe everything from an algorithm that predicts what you might like to buy next, through to a future technology that will run the world and take all of your jobs. For the purposes of this article I will limit the term AI to describe a set of algorithms that help determine the result of a specific query with a certainty high enough to be useful to the customer making the query. For example, given a sentence spoken by a customer, the algorithm has an 80% (or greater) confidence that the intention of the sentence was to order a specified item for delivery at a given time.

One of the aspects of almost all AI is that software developers are no longer working with clear binary results (1 or 0, on or off); instead, with AI algorithms, the result is a percentage of certainty of correctness, often termed confidence.

Working with this confidence, the application can modify the experience for the customer.
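
As a small illustrative sketch (the intent object and the threshold values are assumptions, not from any particular service), an application might branch on confidence like this:

    # Acting on a confidence score rather than a binary result.
    $intent = @{ Name = 'OrderItem'; Confidence = 0.83 }  # hypothetical result from a language-understanding service

    if ($intent.Confidence -ge 0.8) {
        "Proceed with the $($intent.Name) action"          # confident enough to act
    }
    elseif ($intent.Confidence -ge 0.5) {
        "Ask the customer to confirm before acting"        # act only after verification
    }
    else {
        "Ask the customer to rephrase the request"         # too uncertain to act
    }
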
Why is this interesting for a Mixed Reality application?
With the example I just provided, of understanding the intention of a spoken command, a Mixed Reality application can become far more useful. If you have ever worn a headset, whether VR, AR, or MR, you will know that the controls for input are limited. Either hand controllers or simple hand gestures are often not enough to control a complex application or environment. Speech is a great way to enhance the interface, and when the speech can be natural language input that an algorithm translates into an intention the application can act upon, the experience for the customer is greatly improved.

In the one-week workshops, the developers learn how to use computer vision services to recognize the objects that a camera is seeing, translate text between languages, understand the intention of a natural language command, and even build their own machine learning algorithm from scratch. The developers then take these lessons and build out a demo or proof-of-concept application they can take back to their workplace.

One thing that is becoming clear is that while 5 years ago you would have struggled to find things you use every day that utilize some form of AI, in the coming years you will be hard pressed to find any technology that doesn’t take advantage of it.

Dr. Neil Roodyn

Monday, April 23, 2018

Experiences with Microsoft’s Azure Face API

In the last few weeks I have been working with Microsoft’s Azure based Face API.

If you have never used the API, you might well be surprised by how extensive the information returned about each face can be. Here is just a small part of the information that comes back:
1. The coordinates of the face inside the scene.
2. The amount the face is tilted.
3. A guess of the person’s age.
4. How much the person is smiling.
5. Whether the person is wearing glasses or not.
6. Whether the person has facial hair or not.
7. Whether the person is male or female.
8. A guess at the emotional state of the person.

All the above can be obtained, as well as very detailed information about the positions of features within the face.

The way in which the API is used has been designed to be very straightforward.
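
As a minimal sketch of a detection call (the subscription key, region, and image file name are placeholders you would substitute with your own), a single HTTPS request to the v1.0 endpoint returns the face information:

    # Detect faces in a local image and request some of the attributes listed above.
    $key = 'YOUR_SUBSCRIPTION_KEY'   # placeholder for your Azure Face API key
    $uri = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect' +
           '?returnFaceAttributes=age,gender,smile,glasses,facialHair,emotion'

    $faces = Invoke-RestMethod -Uri $uri -Method Post `
        -Headers @{ 'Ocp-Apim-Subscription-Key' = $key } `
        -ContentType 'application/octet-stream' `
        -InFile 'camera-frame.jpg'

    # Each detected face includes a bounding rectangle and the requested attributes.
    foreach ($face in $faces) {
        "Face at $($face.faceRectangle.left),$($face.faceRectangle.top): " +
            "age $($face.faceAttributes.age), smile $($face.faceAttributes.smile)"
    }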

To be able to recognize a face, the Microsoft engine in Azure needs to have some sample images of the face. The set of samples is called the training set, and the project I worked on started by sending a set of images to Azure for each of the people we wanted to recognize later in our project.

When the time came to recognize people, we set up a camera connected to a PC and every few seconds sent the current camera image to Azure asking the Face API to tell us if any faces were in the image.

If a single person walked up to the camera, the response would be that there is one face in the image we had sent. The Face API is quite capable of picking up many faces in a single image (for instance where the image shows a number of people seated around a table).

Once we know there are faces in an image, we need to use a different function in the Azure Face API where we send just the area around a face to Azure and ask whether that face belongs to someone in our training sets. The response we get back is not just a yes/no response, but a probability of how likely it is that the face we sent matches someone. Generally, we would choose the highest probability match (if there is one).
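
As a sketch of that step (assuming the detection call above, and a person group with the hypothetical id 'staff' that has already been created and trained from our sample images; the identify call refers to faces by the id returned from detection):

    # Ask whether a detected face matches anyone in the trained person group.
    $body = @{
        personGroupId = 'staff'              # hypothetical person group id
        faceIds       = @($faces[0].faceId)  # the face id returned by the detect call
    } | ConvertTo-Json

    $result = Invoke-RestMethod -Uri 'https://westus.api.cognitive.microsoft.com/face/v1.0/identify' `
        -Method Post `
        -Headers @{ 'Ocp-Apim-Subscription-Key' = $key } `
        -ContentType 'application/json' `
        -Body $body

    # The response is a list of candidates with confidences, not a yes/no answer,
    # so we take the highest-confidence candidate, if there is one.
    $best = $result[0].candidates | Sort-Object confidence -Descending | Select-Object -First 1
    if ($best) { "Matched person $($best.personId) with confidence $($best.confidence)" }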

In our project we wanted a PC app to trigger an activity whenever someone the app knew came into range of the camera. In effect we would also know when they had left as we would stop seeing them through the camera.

The Face API made it easy for us to set up the project and begin testing. At that stage, we began to realize it was not all quite so simple.

The first sign was that people who walked past the camera in profile were not recognized. Actually, they weren’t even detected as faces! After some investigation it was possible to determine a list of circumstances that were likely to have an impact on whether someone was going to be matched.

The first step in getting a match, as noted above, is to detect that there is a face in an image. This step, we discovered, can be affected by quite a few things. Here is a partial list:
1. A person’s head should not be turned away from the camera by more than about 45 degrees.
2. If the camera is positioned too far above the mid-line of the face, no face is detected. Similarly, even if the face and camera are at the same level but the person turns their face too far up or looks down too far, no face is detected.
3. If the face is tipped too far from vertical with respect to the camera, a face will not be detected.
4. The mouth should not be covered.
5. The nose should not be covered.
6. Each eye should be visible, or obscured by no more than a finger's width.
7. Ears, forehead and chin do not need to be visible.
8. Placing a hand against the side of the head or chin does not prevent detection.
9. Beards, moustaches and glasses do not prevent detection.
10. Strong backlighting (e.g. a large window behind a person) can make detection impossible.

Even if a face is detected, the face may fail to match against the training set due to other problems:
1. If the place/camera where the training set was collected is different from where the recognition is to be done, the success rate in matching may be lowered.
2. If the resolution of the cameras used for training and for recognition are very different, the success rate in matching may be lowered.
3. If the camera resolution is high (e.g. 1920x1080), matching is easily achieved at 2 metres distance from the camera. If the camera resolution is low (e.g. 640x480), matching at 2 metres from the camera becomes difficult.
4. If the facial expression at recognition time is too different to the expression used in the training set (e.g. mouth open at recognition, while the training images all had mouth closed), recognition may fail.

Once you know more about the characteristics of the API, achieving a reliable result becomes more than just a matter of putting some code together. The project design may need to juggle the position of the camera, perhaps using more than one camera. Some thought will also need to go into lighting, and possibly into devising techniques to compensate for perfectly normal face-obscuring activities, such as people simply turning their heads.

Peter Herman