Thursday, June 14, 2018

Creating a PowerApp with a SharePoint list as its datasource

PowerApps is a Microsoft service that lets you quickly create apps for displaying and manipulating data. In this blog post, we will go through how to create a PowerApp for saving, viewing, and editing information that is stored in a SharePoint list and is accessible to everyone in an organisation.

First, we will look at creating a SharePoint list from an Excel sheet.

1. Open an existing Excel sheet that you want to use for your PowerApp. Select the data in the Excel sheet and format it as a table.




2. Log in to SharePoint. From the New tab, select App.




3. Search for “Spreadsheet” and click Import Spreadsheet from the search result.




4. Browse to the Excel sheet you used in Step 1 and click Import.



NOTE: You might face one of these issues when trying to import the Excel sheet to SharePoint as a list.
- Error: “Specified file is not a valid spreadsheet or contains no data to import.”
Solution that worked for me: add the SharePoint URL to your browser's trusted sites list.
- Error: “This feature requires a browser that supports ActiveX controls.”
Solution that worked for me: as I was using Google Chrome on a Windows machine, I installed the Internet Explorer (IE) Tab extension for Chrome and then opened SharePoint from the IE tab. The IE Tab extension is not available for OS X.

You have successfully imported the Excel sheet table as a list in SharePoint.



5. We will now use this list to create a PowerApp. To do this, from your list, click the PowerApps drop-down on the menu bar and select Create an app.



A new PowerApp with the SharePoint list as its datasource will be generated automatically. This PowerApp displays a list of items; you can view an item's details by clicking the (>) arrow next to it, and you can also edit an item's details from within the app.



Your PowerApp is now ready to be published so that it can be useful to everyone in the organisation.

Sabina Pokhrel

Monday, June 4, 2018

Cross-Referencing using Adobe InDesign

Documentation is sometimes the most critical part of a project. Whether it is internal or product documentation, finding a way to automate it can save a lot of time and reduce the number of mistakes that creep in when documents are updated manually.

At nsquared, all our software comes with an extensive user guide for customers to follow, so it’s important for us to automate as much as possible.

The following steps will show you how to create Text Anchors, and how to create Cross-References to these anchors to automate referencing within your document. For example, if your document contains references such as “…refer to Chapter 5 on page 54”, the chapter name and page number will be generated automatically; if the chapter moves from page 54, the reference will update automatically.

Note: This guide assumes that you have an intermediate level of experience with InDesign.


Creating Text Anchors

1. With your document open, identify the text you would like to define as Text Anchors. Typically, these Text Anchors are chapters, headings, and sub headings. For example, Chapter 5.
2. Once you have identified the Text Anchors, highlight the text.
3. Open the Hyperlinks window by going to the Window menu, Interactive > Hyperlinks.
4. With the text highlighted, click the hamburger menu icon in the top right corner of the Hyperlinks window, then click New Hyperlink Destination.
5. A pop up window will appear. From the Type dropdown, select Text Anchor.
6. Give the Text Anchor a Name. It is recommended that the name of the Text Anchor is the same as the text highlighted. This will make it easier to Cross-Reference later, which we will cover in the next section. For this example, we will call the Text Anchor Chapter 5.
7. Click OK.
8. Repeat steps 4-7 to create all the Text Anchors in your document.


Inserting Cross-References through your document

1. Now it is time to reference the Text Anchors. Click where you want to insert a reference in your document; continuing the example from above, “…refer to Chapter 5 on page 54”.
2. Go to the Type menu, Hyperlinks & Cross-References > Insert Cross-Reference.
3. From the Link To dropdown, select Text Anchor.
4. From the Document dropdown, make sure the document you are working on is selected.
5. From the Text Anchor dropdown, select the correct Text Anchor for this reference. For this example, we will find Chapter 5.
6. From the Format dropdown, select the format you wish. These formats can be edited further by clicking the pencil icon.
7. Click OK.
8. Repeat steps 1-7 to insert all Cross-References in your document.


You have now automated all references throughout your document. If you rename a Text Anchor, or it moves to a different page, every Cross-Reference to it will update automatically.

No more incorrect referencing!

Jessica Ayad

Wednesday, May 30, 2018

Test Driven Development (TDD)

What is TDD? How is it different from Unit Tests? How many tests should one write when using a TDD approach? These and many more questions come to mind when we think about or decide to take a TDD approach.

TDD stands for Test Driven Development and is different from writing Unit Tests. Unit Tests refer to what you are testing, whereas TDD describes when you are testing. To simplify this, with Unit Tests we test and verify the smallest possible unit of behaviour, whereas with TDD the tests drive the development. We can say that Unit Tests are part of a TDD approach, in which we write tests before writing the code; this can include unit tests, functional tests, behavioural tests, acceptance tests, etcetera.

The idea looks simple in theory, yet it represents a fundamental change in how we approach software development.

Red-Green-Refactor cycle:
The key to TDD is the Red-Green-Refactor cycle: write a test that fails, write the code to make it pass, and run the tests again; repeat until they pass, then refactor. The diagram below explains it well:


Source: https://centricconsulting.com/case-studies/agile-test-driven-development/

Workflow:
RED - Write a failing test which captures the requirements.
GREEN - Implement the functionality by writing just enough code to pass the test.
REFACTOR - Refine/improve the code without adding any new functionality.

And then repeat the whole cycle.
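To make the cycle concrete, here is a minimal sketch in C# using xUnit; the ShoppingCart class and its members are hypothetical names chosen purely for illustration, not code from one of our products.

using Xunit;

// RED: write a failing test that captures the requirement.
public class ShoppingCartTests
{
    [Fact]
    public void Total_Is_Sum_Of_Item_Prices()
    {
        var cart = new ShoppingCart();
        cart.AddItem(10.0m);
        cart.AddItem(2.5m);

        Assert.Equal(12.5m, cart.Total);
    }
}

// GREEN: write just enough code to make the test pass.
public class ShoppingCart
{
    private readonly System.Collections.Generic.List<decimal> _prices =
        new System.Collections.Generic.List<decimal>();

    public void AddItem(decimal price) => _prices.Add(price);

    public decimal Total
    {
        get
        {
            decimal total = 0m;
            foreach (var price in _prices) total += price;
            return total;
        }
    }
}

// REFACTOR: simplify without changing behaviour, for example
// public decimal Total => _prices.Sum();   (requires using System.Linq)

Running the test first and watching it fail (red) confirms the test itself works; the smallest implementation turns it green; the refactor step then tidies the code while the test keeps it honest.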

In my opinion, it is always helpful to see the code tested up front using this approach. It gives us a sense of confidence before we start end-to-end tests for the project. Not only that, I believe it also leads to better code coverage, fewer defects, and easier maintenance.

As important as it is to start with this approach, it is equally important to keep following it as the code changes. In the real world, applications evolve: over time a method may be added, removed, or modified. Whenever we modify the code at a later stage, we should run all the tests written so far to ensure that we have not broken any existing functionality. In my experience, this reduces testing time by more than half.

It is very important to monitor code coverage throughout the development and maintenance cycle of the application. Code coverage tells us when some code is not being called by any test. Two scenarios apply here: either the code is missing a corresponding test, or it is dead code and needs to be removed.

Because writing, running, and fixing tests consumes time, it is very tempting to put the writing of tests on the back-burner. The most difficult aspect of this is keeping the discipline and continuing the practice. When it comes to shipping robust, high-quality products, the benefits of this approach are rewarding!

At nsquared we are working using TDD on our latest products. If you want to find out more about how we work, please get in touch.

Tripti Wani

Monday, May 21, 2018

Documentation for today's programmer

When creating documentation, whether it is project, lab, or technical documentation, you will have run into the challenge of needing to move it into different formats: PDF is a popular one, but perhaps also HTML, particularly if your company uses a wiki for such things. At nsquared, we found that this movement of documents can get frustrating, not only because they do not always come across cleanly, but also because once the copies differ you have to maintain a bunch of different documents. Time to solve this, using tools which are freely available: Markdown, Pandoc, and PowerShell.

The solution is reasonably simple. You can still write your document up in your favourite word processor; however, keep the formatting to a minimum (avoid anything more complex than bold, italics, and hyperlinks; you can add images, but do not do it in your word processor). Once you have your file ready, save it out as a .docx so that we can get underway in earnest. The first part covers converting your document to the Markdown format.

Steps to convert from docx to Markdown:
1. Download and install Visual Studio Code.

  - We will use this to edit your document later, but essentially this will be your go-to program very soon.
2. Download and install the Pandoc installer (download the latest Windows ‘x86_64.msi’ file).
  - Pandoc is a freely available program, which will handle the conversion of your documents. It supports a host of outputs, including docx, HTML, Markdown, PDF, LaTeX, and txt, just to name a few.
3. If you are running Windows 10, you will already have PowerShell available to you. This solution is written for Windows, though it is transferable to Apple Mac via the Terminal. Once you are ready, launch PowerShell (found by typing 'power' into the search of the Windows menu).
4. You now need to navigate PowerShell to the location of your document. Generally, it will start in your user folder (C:\Users\YourAccount). You can use the 'cd' command, plus the path of your document to get there quickly:

  - Type: cd 'C:\Users\YourAccount\Documents'.
  - Make sure to replace the path section with the location of your document (the above uses 'Documents' as that location).
5. With the above completed, you will notice that the path that PowerShell is using is what you just typed - this means that it is now using this location as the point from which to execute commands.
6. Now it is time to utilise Pandoc. In PowerShell, type: pandoc 'YourDoc.docx' -f docx -t markdown -s -o 'YourNewDoc.md'

  - Make sure to substitute 'YourDoc' with the name of your current document, and change 'YourNewDoc' to the name you want for your converted file.
  - If you want to know more about the commands available for Pandoc, make sure to visit their documentation page.
7. With that all in place, press the Enter key on your keyboard to run the command.
8. Your document will be converted to Markdown (the original document is retained, though you will not need it by the end, so do with it what you like).
9. You have successfully converted your document from .docx to Markdown; the next part is to update your document using Markdown.


Steps to update your Markdown document for easy conversion:
1. Open Visual Studio Code. Once open, you will be presented with a (mostly empty) window. This might look familiar in part if you have used any other Visual Studio program; Visual Studio Code is the lightweight version, and it is remarkably powerful, allowing you to easily write in many programming languages.





2. Click File > Open File, and then browse your files for your new Markdown file (we are opening it in VS Code).
3. With your file now open, you will notice that it is looking very plain. This is the power of Markdown: it utilises only the most basic formatting, which is what allows it to be converted easily into other formats.
4. Here is an excellent cheat sheet of how Markdown works. Have it at the ready, for the next few steps.
5. Now you will need to open the preview page (which shows you what your document will look like with Markdown applied). With your document open, navigate to the top right, and click the split window icon, with the magnifying glass in front of it:





6. You will be presented with a panel to the right of your Markdown document, which shows you what the output will be. You will notice that all the Markdown tags (#, *, ---, ```) are gone, and just plain text appears, with light formatting.





7. Now, using the cheat sheet as a guide, update your Markdown document, so that it presents how you would like it.
8. With your Markdown complete, close Visual Studio Code: you are ready for further conversion!

Steps to convert your Markdown to HTML:
1. Open PowerShell once more, and navigate to your Markdown document:

  - Remember to use the ‘cd’ command and the path to your document – cd 'C:\Users\YourAccount\Documents'
2. Once PowerShell is in the same location as your Markdown document, use the following command to convert from Markdown to HTML:
  - pandoc 'YourNewDoc.md' -f markdown -t html -s -o 'YourNewWebpage.html'
3. Your document is now in HTML! If you had images and set them up correctly (according to the cheat sheet), they will have come across cleanly, creating a ‘media’ folder along the way, to use with your new webpage.

As you will realise, from now on, you simply need to maintain your Markdown document, and then you can convert it; as mentioned previously, this works for PDF and docx too, so you can always produce those formats if you need them.

This is just the start of your Markdown journey; expect to continue and go further. Using Pandoc and PowerShell, you could put together a PowerShell script to automatically convert your latest Markdown document to HTML, so that you can keep working on your documents without worrying about the export process. This is an excellent workflow and may help you increase efficiency!
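If you would rather drive this from a small C# console program than a PowerShell script, the same idea looks roughly like the sketch below: it simply shells out to pandoc for every Markdown file in a folder, using the same flags as the manual command, and it assumes pandoc is installed and on your PATH.

using System;
using System.Diagnostics;
using System.IO;

// Rough sketch: batch-convert every Markdown file in a folder to HTML by
// calling pandoc. Assumes pandoc is installed and available on your PATH.
class ConvertDocs
{
    static void Main(string[] args)
    {
        string folder = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();

        foreach (string md in Directory.GetFiles(folder, "*.md"))
        {
            string html = Path.ChangeExtension(md, ".html");

            var startInfo = new ProcessStartInfo
            {
                FileName = "pandoc",
                Arguments = $"\"{md}\" -f markdown -t html -s -o \"{html}\"",
                UseShellExecute = false
            };

            using (var pandoc = Process.Start(startInfo))
            {
                pandoc.WaitForExit();
                Console.WriteLine($"Converted {md} -> {html}");
            }
        }
    }
}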

Elliot Moule

Tuesday, May 8, 2018

Bringing your 3D Models into Unity

When you are working across multiple programs in technology, it's important for any designer to be aware of which file formats are suitable for the program you're using. There is nothing worse than working long and hard on making something look great in one program, only to hit an error when you drag it into another!

3D is lots of fun, and it is very straightforward to make simple models to deck out any scene in Unity, so I'm going to walk you through a simple procedure for exporting a 3D model into Unity. My preferred 3D modelling software is Autodesk Maya, but you can use any you wish, as long as you can export the model as an FBX file.

Why .fbx?
An FBX file is a 3D asset exchange format that is compatible with many 3D tools. In most cases it also enables you to save your materials on the object if desired. Compared with an OBJ file, your capabilities are much greater.

Getting started:
Jump into Maya and create your model. I have made a simple lamp to use as an example. Note: you want your model to be on the lower side of the poly count. To keep track of the count, select Display > Heads Up Display > Poly Count. The count will appear in the top left corner.




Materials:
You can apply your materials to the object in your 3D software, OR in Unity. For the sake of showing you how to do so in Unity, I'm going to leave my model without a material in Maya.

A few things to check before exporting:
- It is a good idea to combine your meshes. You can do this by holding the mouse and dragging a box over all the objects and selecting Mesh > Combine.
- The ‘Up Axis’ should be set to ‘Y’.
- Your model is on the ground plane, with all location and rotation values set to 0 in the ‘Attribute Editor’ panel.




Exporting:
Getting your model export-ready is very straightforward. Ensure the alignment is correct and everything is squared-out and facing forward toward the positive Z-axis. When you're ready, go to File > Export All. Select FBX under 'Files of Type' to export in the correct file format. Name your file and place it somewhere you can easily access when you open up Unity to import the object.




Once you’re in Unity:
Once you've got a new or existing scene set up in Unity, simply drag and drop your FBX file into the Assets panel. Once you can see your model, drag it into your scene. You can then right-click in the Project panel and click Create > Material. Now for the fun part! You can drag and drop a PNG onto the material if you'd like, or select a colour from the Inspector panel on the right. It's here you can also customise your material by adjusting the X and Y values in the Tiling section.
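If you ever want to make these same material tweaks from code rather than the Inspector, a minimal Unity C# sketch might look like this; the field names and default values are hypothetical, and the script is simply attached to the imported model in the scene.

using UnityEngine;

// Minimal sketch: set a colour/texture and adjust tiling on the imported
// model's material at runtime. Attach this script to the model in the scene.
public class LampMaterialSetup : MonoBehaviour
{
    public Texture2D albedoTexture;            // optional: a PNG assigned in the Inspector
    public Color tint = Color.white;
    public Vector2 tiling = new Vector2(1f, 1f);

    void Start()
    {
        // Accessing .material creates an instance, so other objects sharing
        // the same material are not affected by these changes.
        Material mat = GetComponent<Renderer>().material;

        mat.color = tint;
        if (albedoTexture != null)
        {
            mat.mainTexture = albedoTexture;
        }
        mat.mainTextureScale = tiling;         // the X and Y tiling values
    }
}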




So there you have it!
A basic guide on how to get your 3D objects into Unity correctly as well as a little bit of customisation. Happy designing!

Jacqui Leis

Sunday, April 29, 2018

Adding AI to Mixed Reality

Over the last few months I have had the privilege of helping numerous Microsoft Partners get started with building Artificial Intelligence into their Mixed Reality applications.

It might help if I explain what I mean by Artificial Intelligence, as it is a heavily overloaded term used to describe everything from an algorithm that predicts what you might like to buy next, through to a future technology that will run the world and take all of your jobs. For the purpose of this article I will limit the term AI to describing a set of algorithms that help determine the result of a specific query with a certainty high enough to be useful to the customer making the query. For example, given a sentence spoken by a customer, the algorithm has an 80% (or greater) confidence that the intention of the sentence was to order a specified item for delivery at a given time.



One of the aspects of almost all AI is that software developers are no longer working with clear binary results (1 or 0, on or off); instead, with AI algorithms, the result is a percentage of certainty of correctness, often termed confidence.

Working with this confidence, the application can modify the experience for the customer.
You might be asking: why is this interesting for a Mixed Reality application?
With the example I just provided, of understanding the intention of a spoken command, a Mixed Reality application can become far more useful. If you have ever worn a headset, VR, AR, or MR, you will know that the controls for input are limited. Using hand controllers or simple hand gestures is often not enough to control a complex application or environment. Speech is a great way to enhance the interface. When the speech can be in the form of natural language input that an algorithm can translate into an intention the application can act upon, the experience for the customer is greatly improved.
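As a rough sketch of what working with that confidence can look like in application code, consider the snippet below; the intent name, properties, and the 0.8 threshold are hypothetical values chosen for illustration.

// Rough sketch: act on a recognised intention only when the confidence is
// high enough to be useful; otherwise modify the experience and confirm.
// The OrderIntent type and the 0.8 threshold are hypothetical.
public class OrderIntent
{
    public string Intent { get; set; }        // e.g. "OrderItem"
    public double Confidence { get; set; }    // 0.0 - 1.0
    public string Item { get; set; }
    public string DeliveryTime { get; set; }
}

public static class IntentHandler
{
    const double ConfidenceThreshold = 0.8;

    public static string Respond(OrderIntent result)
    {
        if (result.Intent == "OrderItem" && result.Confidence >= ConfidenceThreshold)
        {
            return $"Ordering {result.Item} for delivery at {result.DeliveryTime}.";
        }

        // Below the threshold the application should not act silently:
        // ask the customer to confirm instead.
        return $"Did you want to order {result.Item}? Please confirm.";
    }
}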

In the one-week workshops, the developers learn how to use computer vision services to recognize the objects that a camera is seeing, translate text between languages, understand the intention of a natural language command, and even build their own machine learning algorithm from scratch. The developers then take these lessons and build out a demo or proof-of-concept application they can take back to their workplace.



One thing that is becoming clear is that while 5 years ago you would have struggled to find things you use every day that utilize some form of AI, in the coming years you will find it hard to find any technology that doesn’t take advantage of some form of AI.

Dr. Neil Roodyn

Monday, April 23, 2018

Experiences with Microsoft’s Azure Face API

In the last few weeks I have been working with Microsoft’s Azure based Face API.

If you have never used the API you might well be surprised by how extensive the information about each face returned by the API can be. Here is just a small part of the information that comes back:
1. The coordinates of the face inside the scene.
2. The amount the face is tilted.
3. A guess of the person’s age.
4. How much the person is smiling.
5. Whether the person is wearing glasses or not.
6. Whether the person has facial hair or not.
7. Whether the person is male or female.
8. A guess at the emotional state of the person.

All the above as well as very detailed information about positions of features in the face can be obtained.

The way in which the API is used has been designed to be very straightforward.

To be able to recognize a face, the Microsoft engine in Azure needs to have some sample images of the face. The set of samples is called the training set, and the project I worked on started by sending a set of images to Azure for each of the people we wanted to recognize later in our project.

When the time came to recognize people, we set up a camera connected to a PC and every few seconds sent the current camera image to Azure asking the Face API to tell us if any faces were in the image.

If a single person walked up to the camera, the response would be that there is one face in the image we had sent. The Face API is quite capable of picking up many faces in a single image (for instance where the image shows a number of people seated around a table).
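As a rough illustration of how straightforward the detection call is, here is a minimal C# sketch using the Face API REST endpoint; the region, subscription key, and the list of attributes requested are placeholders you would replace with your own values.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Minimal sketch: send an image to the Face API detect endpoint and get back
// a JSON array of faces (coordinates, age, smile, glasses, emotion, and so on).
class FaceDetectSketch
{
    const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/face/v1.0/detect" +
        "?returnFaceId=true&returnFaceAttributes=age,gender,smile,glasses,facialHair,emotion";
    const string SubscriptionKey = "<your-face-api-key>";

    static async Task<string> DetectFacesAsync(byte[] imageBytes)
    {
        using (var client = new HttpClient())
        using (var content = new ByteArrayContent(imageBytes))
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            HttpResponseMessage response = await client.PostAsync(Endpoint, content);
            response.EnsureSuccessStatusCode();

            // One JSON entry per detected face, including its faceId,
            // faceRectangle and the requested faceAttributes.
            return await response.Content.ReadAsStringAsync();
        }
    }
}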

Once we know there are faces in an image, we need to use a different function in the Azure Face API where we send just the area around a face to Azure and ask whether that face belongs to someone in our training sets. The response we get back is not just a yes/no response, but a probability of how likely it is that the face we sent matches someone. Generally, we would choose the highest probability match (if there is one).
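Identification follows the same pattern: the faceIds returned by detection are posted to the identify endpoint, which returns candidate matches with a confidence for each. Again, this is a minimal sketch; the person group name and key are placeholders.

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch: ask the Face API whether a detected face matches anyone in
// a previously trained person group.
class FaceIdentifySketch
{
    const string Endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/identify";
    const string SubscriptionKey = "<your-face-api-key>";

    static async Task<string> IdentifyAsync(string faceId)
    {
        // Which person group to search, and which detected face(s) to match.
        string body = "{ \"personGroupId\": \"office-staff\", " +
                      "\"faceIds\": [\"" + faceId + "\"], " +
                      "\"maxNumOfCandidatesReturned\": 1, " +
                      "\"confidenceThreshold\": 0.5 }";

        using (var client = new HttpClient())
        using (var content = new StringContent(body, Encoding.UTF8, "application/json"))
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            var response = await client.PostAsync(Endpoint, content);
            response.EnsureSuccessStatusCode();

            // The response lists candidates for each faceId, each with a
            // personId and a confidence between 0 and 1; we would typically
            // take the highest-confidence candidate, if any.
            return await response.Content.ReadAsStringAsync();
        }
    }
}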

In our project we wanted a PC app to trigger an activity whenever someone the app knew came into range of the camera. In effect we would also know when they had left as we would stop seeing them through the camera.

The Face API made it easy for us to set up the project and begin testing. At that stage, we began to realize it was not all quite so simple.

The first sign was that people who walked past the camera in profile were not recognized. Actually, they weren’t even detected as faces! After some investigation it was possible to determine a list of circumstances that were likely to have an impact on whether someone was going to be matched.

The first step in getting a match, as noted above, is to detect that there is a face in an image. This step, we discovered, can be affected by quite a few things. Here is a partial list:
1. A person’s head should not be turned away from the camera by more than about 45 degrees.
2. If the camera is positioned too far above the mid-line of the face, no face is detected. Similarly, even if the face and camera are at the same level but the person turns their face too far up or looks down too far, no face is detected.
3. If the face is tipped too far from vertical with respect to the camera, a face will not be detected.
4. The mouth should not be covered.
5. The nose should not be covered.
6. Each eye should be visible, or at most obscured by no more than a finger width.
7. Ears, forehead and chin do not need to be visible.
8. Placing a hand against the side of the head or chin does not prevent detection.
9. Beards, moustaches and glasses do not prevent detection.
10. Strong backlighting (e.g. A large window behind a person) can make detection impossible.

Even if a face is detected, the face may fail to match against the training set due to other problems:
1. If the place/camera where the training set was collected is different to where the recognition is to be done, the success rate in matching may be lowered.
2. If the resolution of the cameras used for training and for recognition are very different, the success rate in matching may be lowered.
3. If the camera resolution is high (e.g. 1920x1080), matching is easily achieved at 2 metres distance from the camera. If the camera resolution is low (e.g. 640x480), matching at 2 metres from the camera becomes difficult.
4. If the facial expression at recognition time is too different to the expression used in the training set (e.g. mouth open at recognition, while the training images all had mouth closed), recognition may fail.

Once you know more about the characteristics of the API, achieving a reliable result in a project becomes more than just a matter of putting some code together. The project design may need to juggle the position of the camera, perhaps using more than one camera. Some thought will also need to go into lighting, and possibly into devising techniques to compensate for perfectly normal face-obscuring activities such as people simply turning their heads.

Peter Herman

Wednesday, April 18, 2018

Wood Staining and Finishing for New and Old Timber

A lot of the furniture that we own, in and around the home, is made from wood, although this may be less the case in modern times. Often, when old wooden furniture starts looking worn, people tend to dispose of it, but with just a little bit of time, effort, and some simple tools, you can make your furniture look new again. Alternatively, you might be working on a little project at home that could benefit aesthetically, and functionally, from a coat of stain and varnish. By conditioning and varnishing your timber, you help it keep its shape and extend its lifetime.

You will need some tools along the way, including sandpaper or sanding blocks (usually from 150 grit up to 240 grit), a suitable bristle brush, rags, mineral turpentine, empty jars, and tack cloths. Tack cloth is a slightly adhesive cloth that will help pick up any little imperfections, like dust, that may settle on the wood between coats.


Sanding/preparation process
If you want to recondition existing furniture, or whatever it may be, the first thing you will have to do is remove the existing coats of varnish and stain. If you want to keep the existing colour, then you will simply need to sand off the layer of varnish. I recommend using 180-grit sandpaper to begin with, then working your way up to 240. If you are removing a layer of varnish, using a suitable wet/dry sandpaper and soaking it with water will stop the varnish from sticking to it. If you would like to give it some new colour, keep sanding until you remove the stained layer and get down to the natural colour of the timber. Remember, with everything you do, always work in the direction of the grain: be it sanding, staining, or varnishing. Once you have finished sanding, clean off the dust with a rag, and then again with the tack cloth.

1. First make sure all the parts are sanded down to ensure a smooth surface before conditioning. Depending on the condition of the timber, start with a 160-grit sandpaper and finish on a 240 grit
2. Wipe off the surface with a rag and tack cloth to remove any dust
3. Mix wood conditioner using a clean paddle-pop
4. (Optional) Apply wood primer on all surfaces of the wood with a flood coating, wait 2 hours before flipping over and applying on other side
     a. Wait at least 6 hours before staining


Staining
Next, it is time to stain the timber, if you have chosen to do so. If you would like a natural finish, or to keep the previous colour, you can skip this and go straight to varnishing. There are water-based and oil-based stains and varnishes, but stick to one type for both the stain and the varnish. There are also combined stain-and-finish (varnish) cans you can buy, which varnish your timber at the same time, though you may still need extra coats of varnish once you feel you have reached the desired colour in the wood. Before you begin, make sure you are working in a well-ventilated environment that won't be affected by the stain. Stain is near impossible to remove from clothes, and difficult to remove from skin, so wear gloves, and work somewhere that won't be affected by any splashing.

1. Mix wood stain using a paddle-pop, or, if the stain has not been used in a while, shake it vigorously an hour before use
2. Apply wood stain on top side and end grains on the sides
3. Wipe off any stain that has dripped onto the underside
4. Wait 5-10 minutes
5. Wipe off wood stain using a rag. Ensure this is done in a circular fashion (wax on, wax off)
6. Wait 2 hours until stain is touch dry
7. Flip wood onto the other side
8. Apply wood stain on the new topside
9. Wipe off any stain that has dripped down onto the sides, but take care not to take off any on the top’s edges
10. Wait at least 6 hours before doing another coat
11. Repeat this process so there are 2-3 layers of staining done (3 recommended)


Varnishing
Now for the varnishing: make sure you have wiped the surface off with the tack cloth. If you're working with mostly flat surfaces, start off using a brush, and try to keep the coats even on the surface, but don't go back and try to touch up the varnish if you spot something at the end. If there's an imperfection, wait for it to cure, sand it off, and try again on the next coat. You will want around 2-3 coats with the brush, finished off with spray lacquer (of the same type); this will help you get a nice smooth finish. If you're working with furniture with contoured surfaces, you may find it easier to use spray lacquer exclusively. Leave the brush to soak in mineral turps between layers, and make sure the surface is touch dry before flipping it over and doing the other side.

1. Mix polyurethane clear coat while taking extreme care not to introduce any air bubbles
2. Apply coat to the top and sides, but do not go back over areas that may not be exactly even; doing this afterwards may compromise the finish
3. Wipe off any lacquer that has dripped onto the bottom side
4. Wait 2-3 hours
5. Flip wood back over
6. Apply coat on new topside
7. Wait at least 6 hours
8. Use 180+ grit sandpaper to sand the surface moderately and even out any blotches
9. Use 240+ grit and be gentle after the first coat, taking care not to remove any of the stain!
10. Repeat until 2-3 coats have been applied
11. Sand down once more
12. Wipe off dust with tack cloth
13. Apply an even coat of spray lacquer on top
14. Wait 2-3 hours
15. Flip over
16. Spray even coat on other side
17. Repeat until at least 3 layers of spray lacquer have been applied and the surface is uniform, sanding lightly between each layer
18. (Optional) Coat the table surface with a layer of beeswax timber finish
19. Rub off beeswax with a rag in a circular fashion once it is dry


That concludes this condensed guide on how to complete wood staining and finishing, whether for new timber products or to give new life to your furniture at home. As always, remember to work in a well-ventilated space, and if you don't want something to get dirty from splashes or spills, cover it up or work in a different area.

Charlie Ho Si

Sunday, April 15, 2018

Business Applications for Mixed Reality


The vast majority of virtual, augmented and mixed reality software in the market right now is for entertainment.

A valid case can be made that games, social interactions and 360 videos are the reason for the success of these new technologies within the consumer market, but what I have yet to see are widespread applications that leverage those technologies to benefit productivity and collaboration in the workplace, while also reducing costs for an enterprise.

Let me give you some examples:
If a person wants to learn how to drive a car, the process begins with a test and then practising driving; maybe starting in the parking lot of a factory on a Sunday, like I did when I was younger, to minimise the risk of accidents, and then carefully moving to the road. What if driving schools had an immersive driving experience, where the driver can learn the movements and how to control the car in complete safety? The driver could also be challenged to drive in different weather conditions, like snow, rain and fog, some of which might not be possible depending on where you live, and all in total safety. Nothing will replace the real experience, by all means, but the driving school could benefit by reducing the risk to staff and students, reducing the cost of insurance and car usage, while improving the learning experience.



Let me take you through another example, something I have been secretly working on for some time at nsquared, and, looking at the most recent results, we are now confident enough to talk about it. We call it ‘nsquared screens’. It's an immersive application replicating an environment very similar to the control rooms they have at NASA, airport control towers, stock market trading offices and mall security rooms. Those environments can be very pricey to set up. We have replaced the hardware setup and recreated it as an immersive application, where you can have the data displayed in multiple “floating screens”. Not only does it become much more affordable to create and sustain a control room of this kind, but it also comes with you when you travel for business.



Mixed reality is compelling, and can be incredibly entertaining; however, if you are as passionate as we are about this technology, imagine the ramifications that Mixed Reality can have in improving your professional life and the productivity of your enterprise. At nsquared we are surely in a prime position to make this happen.


Stefano Deflorio

References
Northeast Guilford High School (2018). Distracted Driving Awareness Event. [image] Available at: https://www.flickr.com/photos/ncdot/34126511795 [Accessed 16 April 2018].

Stefano Deflorio. (2017). nsquared screens.

Microsoft (2018). Microsoft Mixed Reality [image] Available at: https://winblogs.azureedge.net/win/2016/12/Windows10-MR-Devices-1024x576.jpg [Accessed 16 April 2018].

Thursday, April 12, 2018

Artificial Neural Networks

When you read about artificial neural networks (ANNs), the first thing you learn is that an artificial neural network is like the human brain: it can be trained to perform a certain task. Just as our brain is composed of neurons that process information received either from the outside world or from other neurons, an ANN has artificial neurons that work the same way. In the case of a human brain, when a person touches a kettle of boiling water, the input is the touch sensation, and the output is a signal from the brain to remove the hand from the kettle. Similarly, for an ANN that is trained for image recognition, when the input is an image of a furry puppy, the output is the word “puppy" or “dog", depending on how it was trained.

Figure 1 - Artificial Neural Network (ANN)


Figure 2 - Biological Neuron
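To make the comparison a little more concrete, here is a tiny sketch of a single artificial neuron: it takes numeric inputs, weights them, adds a bias, and squashes the sum through an activation function. The input and weight values below are made up purely for illustration; a real network has many such neurons arranged in layers, with the weights learned during training.

using System;

// Tiny sketch of one artificial neuron: a weighted sum of inputs plus a bias,
// passed through a sigmoid activation. All values are made up for illustration.
class SingleNeuron
{
    static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    static void Main()
    {
        double[] inputs  = { 0.9, 0.1, 0.4 };   // e.g. signals from other neurons
        double[] weights = { 0.6, -0.3, 0.8 };  // learned during training
        double bias = -0.2;

        double sum = bias;
        for (int i = 0; i < inputs.Length; i++)
        {
            sum += inputs[i] * weights[i];
        }

        double output = Sigmoid(sum);           // how strongly the neuron "fires"
        Console.WriteLine($"Neuron output: {output:F3}");
    }
}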

Although, in recent years, ANNs have been proven to achieve exceptional results on particular tasks, they are yet to reach the capabilities of a human brain, where a single network performs multiple tasks. An ANN is created and trained for a purpose, and with enough data and training it can, no doubt, outperform human brains in executing a task. AlphaGo can be taken as an example: the first computer program to defeat a human world champion at the game of Go. Other tasks where ANNs have shown better results include image and object recognition, and voice recognition. However, the challenge for ANNs lies in training one network that can learn and carry out multiple tasks. It would be absolutely amazing to see an ANN that is powerful enough to recognise a person, learn to play computer games, and write songs as well, and I believe that is the next stepping stone for ANNs.

At nsquared, we are excited to be working with cognitive services and machine learning systems to improve the way we work together. For an experimental project that I worked on, I created a UWP app that produces drawings of objects using TensorFlow, an open-source machine learning library. In this project, I worked with sketch-rnn, a neural network based on a type of ANN called a recurrent neural network. I used pre-trained models that were available online, and also experimented with training my own ANN using existing datasets.

Sabina Pokhrel

References
Burnett, C. (2018). Artificial neural network. [image] Available at: https://commons.wikimedia.org/wiki/File:Artificial_neural_network.svg [Accessed 28 Mar. 2018].

DeepMind. (2018). AlphaGo | DeepMind. [online] Available at: https://deepmind.com/research/alphago/ [Accessed 28 Mar. 2018].

Looxix (2018). Neuron - annotated. [image] Available at: https://commons.wikimedia.org/wiki/File:Neuron_-_annotated.svg [Accessed 28 Mar. 2018].

Steinberg, R. (2018). 6 areas where artificial neural networks outperform humans. [online] VentureBeat. Available at: https://venturebeat.com/2017/12/08/6-areas-where-artificial-neural-networks-outperform-humans/ [Accessed 28 Mar. 2018].

GitHub. (2018). tensorflow/magenta. [online] Available at: https://github.com/tensorflow/magenta/tree/master/magenta/models/sketch_rnn [Accessed 12 Apr. 2018].

Wednesday, April 11, 2018

Designing for Mixed Reality

At nsquared, we are working very closely with Microsoft to build training material to help you build the best Mixed Reality (MR) applications. One of the topics we cover in our training material is how to design for MR, and knowing when to optimise your 3D assets.

Designing for MR can be a daunting challenge, but it is crucial to the successful performance of an application. The main goal of optimisation is finding a balance between beautiful 3D assets and making sure those assets do not hinder the performance of the application. Optimisation can be considered and applied when modelling, UV mapping, texturing and exporting 3D assets, but it is important to understand when to use the correct method.

Generally, when building 3D assets for game engines, such as Unity, it is important to keep the polygon count as low as possible. The lower the polygon count, the more efficiently an application can run, reducing lag. This is even more important when building 3D assets for MR as the performance requirements are higher.

Before reducing the polygon count of your 3D assets, there are a few key questions you need to keep in mind. This ensures that you are creating a well optimised application, while ensuring that the user experience is maintained.

Ask yourself the following questions:


The key is to understand what needs to be detailed, and what does not. Knowing which assets need more detail will help you prioritise the assets that need the greatest number of polygons, so time and effort is not spent on assets of lower importance.
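If you want to keep an eye on your polygon budget once the assets are in Unity, a small sketch like the one below can total the triangles in a scene; the 100,000-triangle budget is an arbitrary, hypothetical figure, so choose one that suits your target device.

using UnityEngine;

// Small sketch: total the triangle count of every mesh in the scene so you
// can see how close you are to your polygon budget.
public class PolyCountReporter : MonoBehaviour
{
    public int triangleBudget = 100000;   // hypothetical budget; tune per device

    void Start()
    {
        int totalTriangles = 0;

        foreach (MeshFilter meshFilter in FindObjectsOfType<MeshFilter>())
        {
            if (meshFilter.sharedMesh != null)
            {
                // The triangles array holds vertex indices; three indices make one triangle.
                totalTriangles += meshFilter.sharedMesh.triangles.Length / 3;
            }
        }

        Debug.Log($"Scene triangle count: {totalTriangles} / budget {triangleBudget}");
    }
}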

Tuesday, April 10, 2018

Bots Everywhere!


As a kid, I used to wonder if I could make an intelligent clone of myself: one which would look like me, sound like me, and do all the work which I never enjoy doing or find boring. As I grew up and entered the professional space, "scheduling meetings" became one of those chores. Although Outlook was helpful in showing the available times for everyone, I wished I did not even have to switch my laptop ON for this. I know I am sounding too lazy, but I am what I am :D.

At nsquared Solutions, the place where I work on amazing tech and produce and develop awesome outcomes, I got an opportunity to turn my wish into reality: automating "scheduling meetings" as a fun and interactive application. I am glad to mention that I used the Microsoft Bot Framework along with my C# skills, as I wanted the bot to be up and running as soon as possible, though the Microsoft Bot Framework also works with Node.js. It supports a variety of platforms and can be hooked up to channels like Skype, Teams, Facebook, and Slack, to mention a few. Coupled with LUIS (Language Understanding Intelligent Service), the bot gets even more power-packed, as LUIS interprets the natural language and creates models which improve with usage.
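As a rough sketch of what a LUIS-backed dialog looks like in C# with the Bot Framework (v3-style Bot Builder SDK), consider the snippet below; the LUIS app id and key, the "ScheduleMeeting" intent, and the "Meeting.Time" entity are placeholders for whatever your own LUIS model defines.

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

// Rough sketch of a LUIS-backed scheduling dialog. Intent and entity names
// are placeholders; your LUIS model supplies the real ones.
[LuisModel("<luis-app-id>", "<luis-subscription-key>")]
[Serializable]
public class SchedulingDialog : LuisDialog<object>
{
    [LuisIntent("ScheduleMeeting")]
    public async Task ScheduleMeeting(IDialogContext context, LuisResult result)
    {
        EntityRecommendation time;
        if (result.TryFindEntity("Meeting.Time", out time))
        {
            await context.PostAsync($"Sure, I'll look for a slot around {time.Entity}.");
        }
        else
        {
            await context.PostAsync("When would you like the meeting to be?");
        }

        context.Wait(MessageReceived);
    }

    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn't catch that. Try asking me to schedule a meeting.");
        context.Wait(MessageReceived);
    }
}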

Whenever I talk about bots, I always think of sci-fi movies with super complex and intelligent digital beings. When I started to work on the bot, my expectation was for it to be intelligent and get even smarter over time. I expected it to deliver new experiences, removing barriers between people and technology. And I wanted it to help us make sense of the huge amount of data that is all around us, to deliver those better experiences. Success in adopting AI to solve real-world problems hinges on bringing a comprehensive set of AI services, tools and infrastructure to every developer, so they can deliver AI-powered apps of the future that offer unique, differentiated and personalised experiences. To summarise my experience, I can say that it is simple for existing developers without deep AI expertise to start building such personalised, data-driven app experiences of the future.

Tripti Wani

Thursday, March 15, 2018

Augmented Reality: what could be, should be



Tony Stark operates his Iron Man suit from his office, Aloy from Horizon Zero Dawn overrides Tall Necks, and Shuri drives a car remotely in Black Panther. Pokémon GO overlays your environment to show creatures, and Vuforia allows you to see and interact with digital content. All of these are powered by Augmented Reality, though the first three fictional examples are a bit different, aren't they: they physically influence the real world, not just overlay it or provide additional data. This is what needs to change about Augmented Reality development.

Just as the remote control revolutionised the use of the television, augmented reality should revolutionise our lives: allowing us to potentially remote control everything! Wave your hand at the TV to change the channel, or grip and turn your hand mid-air towards your air conditioner to turn it up or down. These kinds of gestures can surpass things like voice control because they are universal – no translation needed!

In Sydney, Australia, at nsquared solutions, I work with the team to create some truly amazing applications and experiences. We turn what was the ‘future’ into the ‘modern’, and show businesses and people how they can work together better, by harnessing technology to its full purpose: to make our lives easier and allow people to be more collaborative.





More recently I've been working on an application called nsquared space planner, which is usable with both the Microsoft HoloLens and the new Mixed Reality immersive headsets, though it provides the user with a different experience depending on their device. The core premise of the app is to help people design and plan their spaces. The app provides you with various furniture options (over forty), which can be placed in your space so that you can see what they might look like, at actual scale. The included tools allow you to change the various material styles of the furniture and to place pieces in the world easily, using position and rotation. In this way, it helps people be more productive, by taking away the logistical headaches that would otherwise be associated with such a task.

What if you could take it a step further though, and whilst overlaying your surroundings, also interact with the real world? Make that lamp brighter or turn the stereo up! Have your Italian friend speak to you and receive live translation. Or mute your smartphone, because you’re too busy having fun – I mean, planning and designing your space.

The good news is that this is all very possible. We are already doing it at nsquared, in Sydney! Object, shape, and image recognition is available to devices like the HoloLens. So is translation of hundreds of languages, and the ability to interact with your lamp remotely.

So, the next time you are watching Tony Stark transform his environment with augmented reality, you will be able to do so too, and it won't seem so futuristic. From now on, let's aim to work together better.


Elliot Moule

References
Coogler R, Black Panther (Marvel Studios 2018).
Favreau J, Iron Man (Marvel Studios 2008).
nsquared space planner, (nsquared solutions, 2018).
Tatsuo N, Pokemon GO (Niantic 2016).
van der Leeuw M, Horizon Zero Dawn (Guerilla Games 2017).
Vuforia (PTC Inc 2011).