Channel: Intel Developer Zone Articles

How to package an HTML5/web app for AppUp using node-webkit and the WiX Toolset

Introduction

This article demonstrates one of many ways to package an HTML5/web app for distribution on Windows. The result is a Windows MSI installer file that can install silently. The web app is packaged with node-webkit, which provides the browser environment for the app. The beauty is that your app always runs in the same, constant environment and does not have to cater for multiple browsers.

Note that all installer files submitted to AppUp must be digitally signed: http://software.intel.com/en-us/articles/Application-Signing-FAQs. I recommend using our app signing tool because it is really easy to use and does not require any additional SDK installation: http://software.intel.com/en-us/articles/app-signing-tool.

Step one – Setting up the environment
  • Download and unzip the attached file WixToolsExample.zip anywhere you want. It will set up a directory structure like this:

appfiles – all your files will go here
output – the MSI file will be stored here: your_app_installer.msi
app.wxs – the input file for the WiX Toolset package
heatGenerator.bat – will parse appfiles and provide input for app.wxs
make_app_installer.bat – will build the MSI file
Step two – Your web app and node-webkit:
  • Download node-webkit and unzip it to appfiles
  • Copy your web app and its assets to the same directory (basically everything that needs to be installed on your customer's PC)
  • Create a package.json manifest file as described here: https://github.com/rogerwang/node-webkit/wiki/Manifest-format
  • Start nw.exe, test and debug your app until satisfied
Step three – Get the WiX Toolset
  • Download and install the WiX Toolset from http://wixtoolset.org/
Step four – Create the MSI file
  • Run heatGenerator.bat. It will create a file AppIncludeFile.wxs
  • The batch file uses heat.exe to scan the appfiles directory and its subdirectories and creates commands to add all the files found to the installer
  • The resulting AppIncludeFile.wxs describes everything that is deployed to your customers' PCs, but unfortunately it has a few lines too many.
  • Open AppIncludeFile.wxs in a text editor and make the following changes:
  • Replace the first three lines:
<?xml version="1.0" encoding="utf-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
<Fragment>

With (the capital 'I' is important!)

<Include>
  • Delete all lines like either of these:
<Fragment>
</Fragment>
  • Look for the Component tag that includes nw.exe (our main executable) and delete it (the main executable is already added in app.wxs, and double entries are not allowed):
<Component Id="cmpD418EA5BAB31203D7CA261BB382B8410" Directory="INSTALLDIR" Guid="*">
  <File Id="fil4FFD5BB292D84D37D9237A44A0B55BA8" KeyPath="yes" Source="$(var.MySource)\nw.exe"/>
</Component>
  • Replace the very last line in the file
</Wix>

With

</Include>
  • Save AppIncludeFile.wxs, then open app.wxs in a text editor and change the following lines:
<!-- make your changes here -->
<?define MySource = .\appfiles ?>
<?define MainEXE = nw.exe ?>
<?define MyIcon = icon.ico ?>
<?define Manufacturer = MyCompany ?>
<?define AppName = MyApp ?>
<?define AppArguments = -start ?>
<?define AppVersion = 1.1.0.0 ?>
<!-- the following GUID must change for each product update -->
<?define ProductGUID = 11111-11111-11111 ?>
<!-- the following GUIDs remain the same for all versions -->
<?define UpgradeCode = 22222-22222-22222 ?>
<?define GUID01 = 33333-33333-33333 ?>
<!-- you can get GUIDs from here: http://www.guidgenerator.com/online-guid-generator.aspx -->
<!-- end of changes -->
  • MySource – you shouldn't need to change this, because you copied your app to this directory in step two
  • MainEXE – don't change this; it needs to point to the main executable, and that is nw.exe for node-webkit
  • MyIcon – the name of your icon file stored in MySource. It must be a .ico file. If you have a PNG and need an ICO, try http://convertico.org/image_to_icon_converter/ – the site will convert it for you and worked well for me
  • Manufacturer – type in your company name. The install directory will be named after what you enter here, so be sure not to enter anything that cannot be a directory name! It will also appear in some Windows menus such as “uninstall program”
  • AppName – this is also used as a directory name and as the name for the desktop and start menu shortcuts and the uninstall entry
  • AppArguments – any command line arguments your app needs when started. They will be part of the shortcuts created
  • You need three GUIDs! The first changes with every update you build; the other two remain the same for the lifetime of your app.
  • Save app.wxs and start make_app_installer.bat
  • If all went well, you will see output like this:

  • If you see different output, especially a higher error count than one in the
     “--- MAKEOUTPUT.LOG: 1” line, then something went wrong!
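The three GUIDs mentioned above can also be generated programmatically rather than via the online generator; here is a minimal sketch in Python (purely illustrative, not part of the WiX toolchain):

```python
# Generate the three GUIDs app.wxs needs. ProductGUID must be regenerated
# for every product update; UpgradeCode and GUID01 are generated once and
# then kept for the lifetime of the app.
import uuid

product_guid = str(uuid.uuid4()).upper()   # regenerate for each update
upgrade_code = str(uuid.uuid4()).upper()   # fixed for the app's lifetime
guid01 = str(uuid.uuid4()).upper()         # fixed for the app's lifetime
```

Paste the generated values into the corresponding `<?define ... ?>` lines in app.wxs.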

Done, now you have an MSI file in the folder output that is named your_app_installer.msi.

You can rename and copy the MSI file to another PC and test the installation, run the app and test the uninstall. Remember: before you can upload this for validation you need to sign the MSI file!

Many thanks for reading!

Additional Sources:

There are a number of websites that I used to write this article which are as follows:

node-webkit is an open source app runtime based on Chromium and node.js. You can write native apps in HTML and JavaScript with node-webkit. It also lets you call Node.js modules directly from the DOM and enables a new way of writing native applications with all Web technologies. It was created and is developed in the Intel Open Source Technology Center.

The Windows Installer XML (WiX) is a toolset that builds Windows installation packages from XML source code. The toolset supports a command line environment that developers may integrate into their build processes to build MSI and MSM setup packages. WiX is an open source project, originally developed by Microsoft and maintained by Rob Mensching.

  • The Windows Installer XML (Wix) website and the documentation provided:

http://wixtoolset.org/



  • Developing Enterprise Windows* Store Apps using RESTful Web Services


    Download Article

    Download Developing Enterprise Windows* Store Apps using RESTful Web Services [PDF 596KB]

    Abstract

    This article discusses a way to create Enterprise Windows* Store apps that are connected to RESTful web services. It also includes a case study on implementing a network healthcare-themed application in C# that consumes a RESTful web service.

    Overview

    Since the World Wide Web (WWW) was invented at CERN in 1989, it has quickly become the real distributed computing platform that scientists and engineers have dreamed about for decades. This platform acts as a powerful and scalable storage and transportation tool for data. By mimicking John Gage’s famous phrase “the network is the computer,” we have seen “the Web is the computer” gradually become a reality in both the consumer and the business computing spaces. In 2000, Roy Fielding introduced the Representational State Transfer (REST) software architecture in his PhD dissertation. Since then, RESTful web services have become the mainstream design principle in implementing connected apps. In this article, we will use a healthcare line of business application case study to discuss how to implement Windows Store applications that consume RESTful web services.

    RESTful Web Services

    In traditional computer programming paradigms, we use CRUD to represent the basic operations when dealing with data in persistent storages, such as databases. By definition, the CRUD operations are Create, Read, Update, and Delete. For decades, Structured Query Language (SQL) has been the programming language used to interact with data in a database. The basic SQL queries easily map to the CRUD operations:

    Table 1. CRUD operations in SQL

    Operation | SQL query
    Create    | INSERT
    Read      | SELECT
    Update    | UPDATE
    Delete    | DELETE
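    The CRUD-to-SQL mapping can be exercised end to end with an in-memory SQLite database; this is an illustrative sketch (table and column names are made up for the example):

    ```python
    # Each of the four CRUD operations expressed as its SQL counterpart,
    # run against a throwaway in-memory SQLite database.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")

    con.execute("INSERT INTO patients (name) VALUES (?)", ("Alice",))       # Create
    row = con.execute("SELECT name FROM patients WHERE id = 1").fetchone()  # Read
    con.execute("UPDATE patients SET name = ? WHERE id = 1", ("Bob",))      # Update
    con.execute("DELETE FROM patients WHERE id = 1")                        # Delete

    count = con.execute("SELECT COUNT(*) FROM patients").fetchone()[0]
    ```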

    HTTP (HyperText Transfer Protocol), the foundation of the World Wide Web, serves as the interface between human, computer, and the Web. We can also see how the basic HTTP methods map to the CRUD operations:

    Table 2. CRUD operations in HTTP Methods

    Operation | HTTP method
    Create    | POST
    Read      | GET
    Update    | PUT / PATCH
    Delete    | DELETE

    The same way SQL is used to interact with items stored in databases, such as tables and individual data items, HTTP is used to interact with items flowing through the Web. These items are called “resources.” HTTP uses URIs (Uniform Resource Identifiers) to reference resources on the Web. We often use “URLs” (or “web addresses,” or “web links”) to refer to a web resource, for example, http://software.intel.com. A URL is technically a type of URI.
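    The Table 2 mapping can be captured as a small lookup table; a language-agnostic sketch in Python (the resource path is a made-up example):

    ```python
    # CRUD operations and the HTTP methods they map to (Table 2).
    CRUD_TO_HTTP = {
        "Create": "POST",
        "Read":   "GET",
        "Update": "PUT",    # or PATCH for partial updates
        "Delete": "DELETE",
    }

    def request_line(operation, resource):
        """Build the HTTP request line for a CRUD operation on a resource."""
        return "%s %s HTTP/1.1" % (CRUD_TO_HTTP[operation], resource)
    ```

    For example, reading a collection of patients becomes a GET on its URI.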

    In Chapter 5 of his dissertation, Roy Fielding defines the REST-style software architecture by adding several constraints to the Web: Client-Server, Stateless, Cache, Uniform Interface, Layered System, and Code-On-Demand. A web service is called RESTful if it satisfies these architectural style constraints.

    In this article, we will use a case study to show how to use Visual Studio* 2012 to create a Windows Store app that consumes RESTful web services.

    A Healthcare Enterprise Windows Store App

    Like we did in several other articles in this forum, we will build the case study around a healthcare Windows Store enterprise application. Some of the previous articles include:

    We will extend the same application to access a RESTful web service.

    The application allows the user to login to the system, view the list of patients (Figure 1), and access patient medical records, such as profiles, doctor’s notes, lab test results, vital graphs, etc.

    By extending the app to consume web services instead of using a local database, we enable the app to share data between the devices connected to the Web. Figure 1 is a screenshot of the Healthcare cloud-enabled application that this article is based on.



    Figure 1: The Patients view displays the list of all patients. Selecting an individual patient provides a new view with further medical record details.

    Porting an existing iOS* Web Service App

    For developers with an iOS background, the APIs you probably used to create web service applications include the Cocoa Touch* framework's NSURLConnection or NSURLRequest to interact directly with web resources. A similar URL connection approach will be used in this case study, specifically the C# HttpClient class.

    By working through the case study in this article, you should see similarities between the APIs and get a sense of how HTTP requests are done in a Windows Store app.

    Constructing a RESTful Web Service Windows Store App

    In the following sections we will walk through the code involved to connect to the web service, retrieve data, and then present the information in a healthcare-themed Windows Store app.

    A RESTful Web Service

    There are many different options for creating and deploying a web service that integrates with this type of cloud-based application. If you are porting a cloud-based application to a Windows Store app, you probably already have a web service solution up and running.

    For the sample application outlined in the following sections, a simple RESTful ASP.NET MVC Web Application project was created for testing. It uses a local SQLite DB as the source of data and can be created easily with the Visual Studio* project templates. It runs using the Visual Studio integrated version of IIS and can be deployed to a service that supports IIS, such as Windows Azure*.

    Another possible solution would be to use a Python script to serve up a SQLite DB. Additional resources on possible web service options include the following:

    Further details on how to implement and deploy the web service are outside the scope of this article.

    Set up the Project

    First we need to set up a project. In Visual Studio 2012, one straightforward way to display a collection of information is to use the GridView control. So we will set up a new project and start with a blank template and then add to it a GridView control. The Grid App project could also be used and includes additional logic and views for navigation between screens.



    Figure 2: The Add New Project dialog of the connected healthcare Windows* Store app project in Visual Studio* 2012

    Choose an Item Page

    Next, add a new item to the project and choose an Item Page. This step will add several dependent files to the project in addition to the new grid layout page that contains the GridView control we will use.



    Figure 3: The Add New Item dialog of the connected healthcare Windows* Store app project in Visual Studio* 2012

    Add a View Model Class

    To get started writing code, we need to add a new class named PatientViewModel. We will follow the MVVM design pattern, which you can read about in detail here. This view model class is responsible for a couple of things: it exposes the collection of patients to the GridView control, and it contains the definition and data for a single Patient object pulled from our web service data source. For the sake of brevity, it also contains the definition of the data model. In our example this data source is a RESTful web service queried with an HTTP GET request to retrieve the patient information. The code in Sample Code 1 provides all of the class definitions that we will use in the remainder of the view model. The class patient_main holds the data for a single patient that we pull from the web service. The class ArrayOfPatient lets us easily move the entire collection of patients into our data structure. PatientViewModel exposes only three fields, Title, Subtitle, and Description, that we will eventually bind to the XAML view.

    namespace PatientApp
    {
       // Data Model for patient
       public class patient_main
       {
           public int id { get; set; }
           public string lastname { get; set; }
           public string firstname { get; set; }
           public string gender { get; set; }
           public string dob { get; set; }
           public string ssn { get; set; }
           public string status { get; set; }
           public string lastvisit { get; set; }
           public string insurance { get; set; }
       }
       
       // There are some extra attributes needed to deal with the namespace correctly
       [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true, Namespace = "http://schemas.datacontract.org/2004/07/PRCloudServiceMVC")]
       [System.Xml.Serialization.XmlRootAttribute(Namespace = "http://schemas.datacontract.org/2004/07/PRCloudServiceMVC", IsNullable = false)]
       public class ArrayOfPatient
       {
           [System.Xml.Serialization.XmlElementAttribute("Patient")]
           public patient_main[] Items;
       }
    
    
       class PatientViewModel : BindableBase
       {
    
           private string title = string.Empty;
           public string Title { get { return title; } set { this.SetProperty(ref title, value); } }
    
           private string subtitle = string.Empty;
           public string Subtitle { get { return subtitle; } set { this.SetProperty(ref subtitle, value); } }
    
           private string description = string.Empty;
           public string Description { get { return description; } set { this.SetProperty(ref description, value); } }
         }
     }

    Sample Code 1**

    Retrieving Data from the View Model

    The following source in Sample Code 2 contains the method that the view will use to retrieve the list of patients from the view model. The bindable patient data is stored in an ObservableCollection and allows the loading of patient data to continue in the background while this method returns immediately to the UI.

          private static ObservableCollection<PatientViewModel> _patients = null;
    
           // Retrieve the collection of patients
           public static ObservableCollection<PatientViewModel> GetPatients()
           {
               // create a new collection to hold our patients
               _patients = new ObservableCollection<PatientViewModel>();
    
               // start the task of retrieving patients from the service
               LoadPatientDataFromService(_patients);
    
            // return the collection, which will continue to be filled asynchronously
               return _patients;
           }
    

    Sample Code 2**
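    The “return the collection now, fill it in the background” shape of GetPatients can be sketched in a language-agnostic way; here Python threading stands in for C#'s async/await, and all names are illustrative:

    ```python
    # Return a container immediately and let a worker thread fill it, the
    # same pattern GetPatients uses with its ObservableCollection. The
    # thread handle is also returned so a caller can wait for completion
    # (in the C# version, data binding simply observes the collection).
    import threading

    def get_patients(fetch):
        """Return an empty list at once; a background worker populates it."""
        patients = []

        def worker():
            patients.extend(fetch())  # items appear as the data arrives

        t = threading.Thread(target=worker)
        t.start()
        return patients, t
    ```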

    LoadPatientDataFromService is where the actual work is done, and this work is done asynchronously. The function itself does not return any data, but instead adds patient objects to the collection. When a new patient is added to the collection, the XAML View that is bound to the collection is automatically notified and updates the view accordingly. LoadPatientDataFromService uses the async keyword to indicate that this method is allowed to return to the calling function while it waits for work to be completed inside.

    Sample Code 3 shows the contents of this method and starts the process of making and responding to the HTTP request.

       async private static void LoadPatientDataFromService(ObservableCollection<PatientViewModel> patients)
       {
           // The serializer to deserialize XML into an array of data objects
           XmlSerializer ser = new XmlSerializer(typeof(ArrayOfPatient));
    
           // make the call to the webservice to retrieve a stream of XML
        Stream rawXml = await GetPatientsRawXmlDataFromWebServiceAsync();
    
        // Create an XML reader to feed to the deserializer
           XmlReaderSettings readerSettings = new XmlReaderSettings() { Async = true, CloseInput = true };
           using (XmlReader reader = XmlReader.Create(rawXml, readerSettings))
           {
               // Deserialize the xml stream into an array of patient data
               var patientarray = ser.Deserialize(reader) as ArrayOfPatient;
    
               // Create our collection of view model patient objects
               foreach (var patientdata in patientarray.Items)
               {
                   // Use some of the data elements to populate the view model
                   var p = new PatientViewModel()
                   {
                    Title = patientdata.firstname + " " + patientdata.lastname,
                       Subtitle = "Last Visit: " + patientdata.lastvisit,
                       Description = "Date of Birth: " + patientdata.dob + " Gender: "+ patientdata.gender 
                   };
                   patients.Add(p); // add this to the collection - view will automatically get notified
               }
           }
       }
    

    Sample Code 3**

    For our example we ask the web service for XML as the response data format and, with the response, deserialize the data into an array of patient data. The serializer object does this for us once we have set up the types correctly with the correct namespace. See the definition of ArrayOfPatient for the attributes needed for this example to work with the namespace correctly.
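    The deserialization step has a direct counterpart in most languages; here is an illustrative Python sketch (element names match the patient_main fields, and the sample payload is invented for the example):

    ```python
    # Deserialize an ArrayOfPatient-style XML payload into simple records,
    # analogous to what XmlSerializer does for us in the C# sample.
    import xml.etree.ElementTree as ET

    SAMPLE = """
    <ArrayOfPatient>
      <Patient><firstname>Alice</firstname><lastname>Smith</lastname></Patient>
      <Patient><firstname>Bob</firstname><lastname>Jones</lastname></Patient>
    </ArrayOfPatient>
    """

    def parse_patients(raw_xml):
        """Return (firstname, lastname) pairs for every Patient element."""
        root = ET.fromstring(raw_xml)
        return [(p.findtext("firstname"), p.findtext("lastname"))
                for p in root.iter("Patient")]
    ```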

    GetPatientsRawXmlDataFromWebServiceAsync performs the next critical task and is given below in Sample Code 4. Here, we are using localhost and port 52358 as our URI since we are locally hosting the web service in a separate Visual Studio project. 52358 just happens to be the default port for this project.

       // Makes the HTTP GET request and returns a stream of XML response data
       async private static Task<Stream> GetPatientsRawXmlDataFromWebServiceAsync()
       {
           // construct URI from our session setting
        UriBuilder uriBuilder = new UriBuilder("http://localhost:52358/");
        uriBuilder.Path = "api/Patients";
    
           // construct an HttpClient object to make the request. Use a custom client handler
           // to request XML instead of txt
           HttpMessageHandler handler = new WebProcessingHandler(new HttpClientHandler());
           HttpClient client = new HttpClient(handler);
    
           // Make the GET request
           HttpResponseMessage response = await client.GetAsync(uriBuilder.Uri);
           if (response.IsSuccessStatusCode)
           {
               // read the result
               return await response.Content.ReadAsStreamAsync();
           }
           return null;
       }
    

    Sample Code 4**

    The .NET class WebRequest is not available to Windows 8 Store apps; instead we must use an HttpClient object. The custom message handler is shown in Sample Code 5. The only purpose of the custom handler is to put “application/xml” in the request header. If this isn't done, the request would return JSON-formatted text instead of XML. JSON deserialization is just as easy as XML, with the DataContractJsonSerializer class handling the deserialization. Two asynchronous calls, GetAsync and ReadAsStreamAsync, are invoked to make the request and read the response.

    // Custom Request Message handler
    private class WebProcessingHandler : MessageProcessingHandler
    {
        public WebProcessingHandler(HttpMessageHandler innerhandler)
            : base(innerhandler)
        {
        }
    
        protected override HttpRequestMessage ProcessRequest( HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            if (request.Method == HttpMethod.Get)
            {
                // Request XML instead of txt if this is an Http GET
                request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
            }
            return request;
        }
    
        protected override HttpResponseMessage ProcessResponse(HttpResponseMessage response, System.Threading.CancellationToken cancellationToken)
        {
            return response;
        }
    }
    

    Sample Code 5**

    That’s it: GetPatients is all set to make a call to the cloud and asynchronously retrieve our patient data.
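    The Accept-header trick the custom handler performs is the same in any HTTP client; a small Python sketch using the article's localhost URL (no network call is made here):

    ```python
    # Build a GET request that asks the service for XML instead of JSON,
    # mirroring what WebProcessingHandler does in the C# sample.
    import urllib.request

    req = urllib.request.Request("http://localhost:52358/api/Patients")
    req.add_header("Accept", "application/xml")  # without this, JSON comes back
    ```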

    Binding the data in the View Model to the View

    Next, we will take a look at how the data in the view model is bound to the view while writing only one more line of C# code. Sample Code 6 below contains the definition of the GridView control in the XAML layout page. These values are all default except for changing the item template to Standard500x130ItemTemplate. These templates are all defined in StandardStyles.xaml, and using this predefined item template allows it to automatically bind to the properties Title, Subtitle, and Description within the ViewModel class. In addition to the GridView control, the CollectionViewSource definition for itemsViewSource is shown; this is the object we will eventually bind the Patient collection to. The GridView control sets itemsViewSource as the source of data with the XAML line ItemsSource="{Binding Source={StaticResource itemsViewSource}}".

    <!--Collection of items displayed by this page -->
    <CollectionViewSource
    x:Name="itemsViewSource"
    Source="{Binding Items}"/>
    
    <GridView
           x:Name="itemGridView"
           AutomationProperties.AutomationId="ItemsGridView"
    AutomationProperties.Name="Items"
    TabIndex="1"
    Grid.RowSpan="2"
    Padding="116,136,116,46"
    ItemsSource="{Binding Source={StaticResource itemsViewSource}}"
    ItemTemplate="{StaticResource Standard500x130ItemTemplate}"
    SelectionMode="None"
        IsSwipeEnabled="false"/>
    

    Sample Code 6**

    Here is what the Standard500x130ItemTemplate item template definition looks like. Notice the three text blocks with the Binding keyword.

    <!-- Grid-appropriate 500 by 130 pixel item template as seen in the GroupDetailPage -->
       <DataTemplate x:Key="Standard500x130ItemTemplate">
           <Grid Height="110" Width="480" Margin="10">
               <Grid.ColumnDefinitions>
                   <ColumnDefinition Width="Auto"/>
                   <ColumnDefinition Width="*"/>
               </Grid.ColumnDefinitions>
               <Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}" Width="110" Height="110">
                   <Image Source="{Binding Image}" Stretch="UniformToFill" AutomationProperties.Name="{Binding Title}"/>
               </Border>
               <StackPanel Grid.Column="1" VerticalAlignment="Top" Margin="10,0,0,0">
                   <TextBlock Text="{Binding Title}" Style="{StaticResource TitleTextStyle}" TextWrapping="NoWrap"/>
                   <TextBlock Text="{Binding Subtitle}" Style="{StaticResource CaptionTextStyle}" TextWrapping="NoWrap"/>
                   <TextBlock Text="{Binding Description}" Style="{StaticResource BodyTextStyle}" MaxHeight="60"/>
               </StackPanel>
           </Grid>
       </DataTemplate>
    
    

    Sample Code 7**

    To make the association that the list of Patients we retrieve should be used in this view, we just need a single line. The source below goes in the code behind C# file for the XAML view and makes the call to our previously described GetPatients() method.

    protected override void LoadState(Object navigationParameter, Dictionary<String, Object> pageState)
    {
        this.DefaultViewModel["Items"] = PatientViewModel.GetPatients();
    }
    

    Sample Code 8**

    One of the greatest advantages of this binding is that no additional code is needed to handle events for when a Patient is added or removed from the collection. This is automatically taken care of for you by the ObservableCollection and the XAML View.

    After this data binding step is done, we will see a grid view of mock patient data as shown in Figure 4.



    Figure 4: Screenshot of the sample code app with mock patient data and default styling

    With additional styling, a professional look can easily be achieved; see Figure 1 at the start of this article.

    Summary

    We have discussed the RESTful software architecture and how to apply the development principles in Windows* Store Enterprise applications. From our case study we see the Windows 8 Runtime provides a good foundation for creating web service-based applications.

    About the Authors

    Nathan Totura is an application engineer in the Intel Software and Services Group. Currently working on the Intel® Atom™ processor-enabling team, he helps connect software developers with Intel technology and resources. Primarily these technologies include tablets and handsets on the Android*, Windows 8, and iOS platforms.

    Miao Wei is a software engineer in the Intel Software and Services Group. He is currently working on the Intel® Atom™ processor scale enabling projects.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    **This sample source code is released under the Intel OBL Sample Source Code License (MS-LPL Compatible), Microsoft Limited Public License, and Visual Studio 2012 License.

  • Camera Usage and Features, from iOS* to Windows* 8 Store App


    Download Article

    Download Camera Usage and Features, from iOS* to Windows* 8 Store App [PDF 1.26MB]

    Abstract

    Modern applications take advantage of camera features to enable new types of usage models and enhanced user experiences. In this article we will cover how to use the camera feature in a Windows* 8 Store app to take pictures and how to convert the picture into base64 encoding, which can then be transmitted as JSON data over the network to a backend server or persisted into a local database. We will also discuss how iOS* camera usage compares to Windows 8 and some tips on migrating your code.

    Contents

    Overview

    The camera is one of the most used features in mobile device applications. Users can capture photos or record videos with different configuration settings. All of the modern OS platforms, like iOS and Windows 8, provide APIs and platform services to make camera usage seamless for the end user and easy to use for the programmer. This gives users a well-known user interface to interact with the camera, common across all apps. Some apps may create a fully customized camera experience depending on their requirements, but for most applications the default usage is recommended.

    In this article we will cover how iOS developers can port their camera-related code to a Windows 8 Store app. We will discuss how to invoke the default camera UI control, process the result using async programming patterns, and use the Image control to display the picture in a Windows Store app UI.

    We will also cover how to encode and decode the picture to/from base64 format, which can be very useful in line of business apps that may require transmitting the image over the network as JSON data for RESTful backend services.
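    The base64-in-JSON idea is language-agnostic; a minimal Python sketch (the byte string stands in for real JPEG data, and the JSON field names are invented for the example):

    ```python
    # Round-trip image bytes through base64 so they can ride inside a JSON
    # payload to a RESTful backend, then decode them on the other side.
    import base64
    import json

    picture_bytes = b"\xff\xd8\xff\xe0 fake jpeg payload"
    encoded = base64.b64encode(picture_bytes).decode("ascii")

    payload = json.dumps({"patient_id": 42, "photo": encoded})  # illustrative
    decoded = base64.b64decode(json.loads(payload)["photo"])
    ```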

    Camera Controls and UI

    The default camera UI controls come with some standard options that users can tweak. They usually handle all user interactions with the camera UI, including the touch gestures, finally returning the result (captured picture or video) to the code invoking it (via delegates or asynchronous callbacks).

    Depending on the API settings, the camera controls let users edit the picture (e.g., crop) and customize as needed. For advanced editing and customization, we may have to develop our own camera UI control as the default control capabilities are limited.

    These controls also provide a way for users to play with different device camera capabilities like flash, brightness, and other standard features depending on the platform.

    The code required to use these controls is usually simple and easy to integrate as long as you stick to the platform recommended API usage patterns.

    A Healthcare Windows Store App

    Like we did in several other articles in this forum, we will build the case study around a healthcare Windows Store application. We will extend it with the capability to take a picture and update the patient’s profile.

    Some of the previous articles include:

    The application allows the user to login to the system, view the list of patients (Figure 1), and access patient medical records, such as the profiles, doctor’s notes, lab test results, vital graphs, etc.



    Figure 1: The “Patients” page of the Healthcare Line of Business app provides a list of all patients. Selecting an individual patient provides access to the patient’s medical records.

    In Figure 1 all the profile pictures are shown as generic avatars. With the camera feature enabled, we will be able to update those images to the most recent profile picture of each patient as needed.

    Migrating Camera related code from iOS to Windows 8

    The iOS platform provides both a default camera UI control option and a more advanced custom solution with greater flexibility. The UIImagePickerController API gives us the simple, default camera usage for taking pictures, and the AV Foundation framework provides the fully customizable solution. This document has more details:

    http://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/CameraAndPhotoLib_TopicsForIOS/Introduction/Introduction.html

    iOS developers looking to port camera-related features in their apps to Windows 8 Store apps can use the default camera UI and control, CameraCaptureUI, which is part of Windows.Media.Capture namespace.

    http://msdn.microsoft.com/EN-US/library/windows/apps/windows.media.capture.cameracaptureui.aspx

    Using the Windows 8 camera UI and some tips on delegating the code

    In this article we assume basic familiarity with Windows Store app development. Similar to other modern OSs (e.g., iOS), Windows 8 Store apps must declare camera capability in the app manifest. Double clicking on the Package.appxmanifest in your Visual Studio* 2012 project should bring up a manifest UI that you can use to tweak the settings. Figure 2 shows the webcam capability enabled in the manifest, which lets us use the camera feature.



    Figure 2: App manifest showing the webcam package capability (captured from Visual Studio* 2012)

    Our project should now be ready to access the camera feature.

    When the app accesses the camera for the first time, Windows 8 brings up a user permission dialog allowing the user to either block or enable camera access. Figure 3 shows an example.





    Figure 3: Camera permission dialog (captured from Windows* 8)

    As discussed previously, Windows Store apps can use the system-provided default camera UI control (CameraCaptureUI class under Windows.Media.Capture namespace) for most of the use cases. This has an additional advantage in that users will already be familiar with the UI controls, gestures, and usage.

    The app usually invokes the camera control in response to some kind of user action—a button click or a touch gesture.

    Figure 4 shows the app bar with a camera button labeled “Photo.”



    Figure 4: Camera button and icon (captured from Windows* 8)

    We can use the Windows standard icon (PhotoAppBarButtonStyle) for styling our camera button. Figure 5 shows the sample XAML code for a camera button as part of the app bar.

    <Button x:Name="button_take_photo" Grid.Column="1" HorizontalAlignment="Left" Margin="0,0,0,0"
    Style="{StaticResource PhotoAppBarButtonStyle}" Height="87" VerticalAlignment="Top" 
    DataContext="{Binding Patient}" Command="{Binding PatientPicCmd}" RenderTransformOrigin="0.5,0.5" 
    Visibility="Visible"/>
    

    Figure 5: XAML code for camera button and default icon as part of app bar ++

    In Windows 8 the button click input events from all sources (touch, mouse, etc.) are automatically routed to the same input handler, in this case the Command property (PatientPicCmd binding). The PatientPicCmd command is covered later in this article.

    It is usually recommended to structure XAML-based apps using the MVVM pattern. Please refer to the article below for more details on MVVM:

    http://msdn.microsoft.com/en-us/magazine/dd419663.aspx

    This allows us to implement our button event handler logic inside the patient view model and use the same view model and logic in any UI page of the app by simply binding to properties. Figure 6 shows another UI page where we have another camera button. No need for re-implementing the button click handler in XAML code behind, as we simply bind to the same patient view model class (which has our camera button click handler logic).



    Figure 6: Another UI page with camera button (captured from Windows* 8)

    Invoking the camera dialog is straightforward in Windows 8. We just have to create an instance of CameraCaptureUI class, configure the instance with any custom settings, and then show the camera UI dialog with CaptureFileAsync method of the instance. Since this method is implemented as an async pattern, we will need to use the await keyword to invoke it.

    For more details on asynchronous programming in Windows 8 Store apps, please refer to the following article.

    http://msdn.microsoft.com/EN-US/library/vstudio/hh191443.aspx

    Figure 7 shows sample code for invoking the default camera UI in Windows 8.

    private async void DoPatientPicCmd(object sender)
            {
                try
                {
                    CameraCaptureUI dialog = new CameraCaptureUI();
                   
                    dialog.PhotoSettings.CroppedAspectRatio = new Size(4, 3);
    
                    // fix the photo size to 300x300
                    //dialog.PhotoSettings.CroppedSizeInPixels = new Size(300, 300);
    
                    StorageFile photo = await dialog.CaptureFileAsync(CameraCaptureUIMode.Photo);
                    if (photo == null) return;
                  
                    // custom process the photo
                }
                catch (Exception ex)
                {
                    // custom exception handling code
                }
            }
    

    Figure 7: Sample code for invoking camera UI dialog ++

    The CameraCaptureUI class has two main properties: PhotoSettings and VideoSettings. We can restrict the camera dialog to photo mode by specifying “CameraCaptureUIMode.Photo” when invoking it.

    PhotoSettings is an instance of the CameraCaptureUIPhotoCaptureSettings class and can be used to specify options such as cropping, aspect ratio, max resolution, and format. Please refer to the following article for more details:

    http://msdn.microsoft.com/EN-US/library/windows/apps/windows.media.capture.cameracaptureuiphotocapturesettings.aspx

    In our sample code we enabled cropping with an aspect ratio 4:3. Figure 8 shows the default Camera UI dialog. It has the video option disabled since we invoked the control with photo mode. Other options are change camera (to switch to back, front, or other cameras), camera options (e.g., resolution), and timer functionality.



    Figure 8: Default camera UI dialog (captured from Windows* 8)

    Users can either single tap or click anywhere on the screen to take a photo. Depending on the configuration, the app brings up another dialog allowing users to crop or edit the picture. The edit options are basic and limited. For advanced editing, a custom camera UI control is recommended (or you can add a process for editing later).



    Figure 9: Camera UI control allows basic editing (captured from Windows* 8)

    After users are satisfied with their photos, they can click OK to accept, or retake as needed. After a successful action (the photo is captured and OK is selected), the camera UI dialog returns the captured photo as an image file (an instance of the StorageFile class in the Windows.Storage namespace), which can then be processed further or persisted to the data model.

    Converting the photo image file into an image buffer

    To transmit photo images as JSON to a backend server (typical use case in enterprise apps), we want it to be in base64 encoded string format. To enable this conversion we first need to convert the photo file into an image buffer, as the encoding/decoding APIs expect a buffer.

    It is strongly recommended to use an async pattern where possible, as all of this processing is happening inside the photo button click handler.

    We access the photo file as a random access stream and copy it into a buffer using the DataReader class. Sample code is shown in Figure 10.

                    StorageFile photo = await dialog.CaptureFileAsync(CameraCaptureUIMode.Photo);
                    if (photo == null) return;
                  
                    byte[] photoBuf = null;
    
                    using (IRandomAccessStream photostream = await photo.OpenReadAsync())
                    {
                        photoBuf = new byte[photostream.Size];
                    using (DataReader dr = new DataReader(photostream))
                        {
                            await dr.LoadAsync((uint)photostream.Size);
                            dr.ReadBytes(photoBuf);
                        }
                    } 
    

    Figure 10: Sample code to convert the photo file into a buffer ++

    Encoding and decoding image buffers into/from base64 encoded strings

    We can use the Convert class in System namespace to encode and decode base64 format strings. In the previous section we discussed how to convert the photo image into a byte buffer, which we can use in base64 conversion APIs. The ToBase64String method is documented here:

    http://msdn.microsoft.com/EN-US/library/dhx0d524.aspx

    The code snippet below shows how to convert the photoBuf buffer we created in the previous section into a base64 encoded string.

                    Pic = Convert.ToBase64String(photoBuf, 0, photoBuf.Length);
    

    The variable “Pic” is a public property of the patient view model class we discussed earlier.

    To convert the string back into a buffer:

                    var photoBuf = Convert.FromBase64String(pic);
    

    Displaying the picture using Image Control and BitmapImage Binding

    We can use the XAML Image control to display our photo. Please refer to the following document for more information:

    http://msdn.microsoft.com/EN-US/library/windows/apps/windows.ui.xaml.controls.image.aspx

    The Image control is very flexible, allowing different image formats and options. For example, we can display our photo by binding to our patient view model, as shown below:

    <Image  Margin="0,40,20,0"            
                                DataContext="{Binding Patient}"
                                Source="{Binding Image}"></Image>
    

    The Source property of the Image control accepts either a BitmapImage instance or a direct source path to an image file. To update our image dynamically, it is convenient to bind it to a BitmapImage instance. You can find more details on this class here:

    http://msdn.microsoft.com/EN-US/library/windows/apps/windows.ui.xaml.media.imaging.bitmapimage.aspx

    To reiterate, it’s strongly recommended to use async methods where possible as these properties are bound to XAML UI elements.

    BitmapImage gives us the option to use a URI directly or to set our photo buffer as the input stream via the SetSourceAsync method. We use SetSourceAsync to generate the profile picture if one is available, or else a random avatar (from app assets) selected by patient gender. Please refer to the code snippet in Figure 11 below.

        public class PatientsViewModel : BindableBase
        {
            public PatientsViewModel() 
            {
                this.PatientPicCmd= new DelegateCommand(DoPatientPicCmd);
            }
    
            private string pic = string.Empty;
            public string Pic 
            { 
                get { return pic; } 
                set { 
                    this.SetProperty(ref pic, value);                
                } 
            }
    
            private BitmapImage image = null;
            public BitmapImage Image
            {
                get
                {
                    if (image == null) GetImageAsync();
                    return image;
                }
            }
    
            public async Task GetImageAsync()
            {
                image = new BitmapImage();
                if (pic.Length > 1)
                {
                    var photoBuf = Convert.FromBase64String(pic);
                    using (InMemoryRandomAccessStream mrs = new InMemoryRandomAccessStream())
                    {
                        using (DataWriter dw = new DataWriter(mrs.GetOutputStreamAt(0)))
                        {
                            dw.WriteBytes(photoBuf);
                            await dw.StoreAsync();
                        }
                        await image.SetSourceAsync(mrs);
                    }
                }
                else
                {
                    Random rand = new Random();
                    String url = "ms-appx:///Assets/" + gender.ToLower() + rand.Next(1, 4).ToString() + ".png";
                    StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri(url));                
                    using (IRandomAccessStream fileStream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read))
                    {
                        await image.SetSourceAsync(fileStream);
                    }
                }
                OnPropertyChanged("Image");
            }
             …
             …
    

    Figure 11: Sample code for displaying a photo in an Image control using BitmapImage & SetSourceAsync ++

    The Pic string property stores the base64 encoded string converted from our camera photo file. This property can be persisted to a local database or transmitted to a backend server. The Image property (of type BitmapImage), which we bind to our XAML Image control, returns a BitmapImage depending on the contents of Pic: if Pic is empty, it returns a random avatar; otherwise it converts the base64 string back into a byte buffer and returns a BitmapImage instance containing it.

    Figure 12 shows the updated patients screen we referred to in Figure 1.





    Figure 12: Updated patients UI page (captured from Windows* 8)

    Users can click on any patient and update his/her profile picture, and the XAML binding will automatically update all references to it in Image controls.

    Summary

    We have discussed how iOS developers can port their camera-related code to Windows 8. We covered how to invoke the default camera UI control, process the result, use async programming patterns, and use the Image control to display the picture in Windows Store apps UI. We also covered how to encode and decode the picture to/from base64 format, which can then be easily transmitted as JSON data to RESTful backend services.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    ++This sample source code is released under the Intel OBL Sample Source Code License (MS-LPL Compatible), Microsoft Limited Public License, and Visual Studio* 2012 License.

Intel Media SDK Tutorial - simple_7_decode - d3d - ocl_postproc
    This tutorial sample is similar to the “simple_6_decode_vpp_postproc” sample but instead of using VPP to post process the frames, the sample efficiently integrates with Intel OpenCL* SDK to enable custom frame processing executed on Intel® HD Graphics.

    For optimal performance, stream decode and frame processing are both executed using Intel® HD Graphics, featuring efficient surface sharing via the OpenCL 1.2 Khronos DX9 Media Surface Sharing extensions.

    The sample utilizes a generic OpenCL frame processing class, capable of NV12 type surface processing, located in the tutorial "common" samples folder.

    This tutorial sample code requires the following components to be installed on the developer system:

    1. Microsoft Visual Studio 2010* or later

    2. Intel® SDK for OpenCL* Applications 2013

        - The SDK can be downloaded from here: http://software.intel.com/en-us/vcsource/tools/opencl-sdk
        - More details about the SDK can also be found via the above link.

    3. Intel® HD Graphics Driver 15.31.3071 or later

        - Intel drivers can be downloaded from here: https://downloadcenter.intel.com/default.aspx
        - Supports 3rd generation Intel® Core™ Processors and upcoming next generation Intel® Core™ Processors

    Note that OpenCL processing using Intel® HD Graphics is only supported on 3rd generation Intel® Core™ Processors and upcoming next generation Intel® Core™ Processors.

    This tutorial sample is found in the tutorial samples package under the name "simple_7_decode - d3d - ocl_postproc". The code is extensively documented with inline comments detailing each step required to setup and execute the use case.


Energy-Efficient Software Guidelines


    You may also be interested in the Energy-Efficient Software Checklist


    The purpose of this document is to provide energy efficient software guidelines extending the items described in the “Energy-Efficient Software Checklist” document. The guidelines are OS and architecture agnostic except where otherwise noted.

    The following guidelines focus on how to optimize applications for energy efficiency. Note that the goal is not to provide system-level optimization suggestions. That said, it is beneficial to the application developer to look at system-wide power efficiency, as running background applications might affect or interact with the target application. For instance, a background virus checker might slow down file access or impact generic performance when active. To eliminate or reduce background impact, try to disable or minimize potential system culprits when measuring application energy efficiency.

    Before performing any power optimizations, it is also suggested that you first create a baseline measurement using the existing code base, so that you have a good reference for comparison when modifying your application for energy efficiency.

    For a more detailed overview of the topics discussed here, please refer to the Energy-Efficient Software community and the paper “Creating Energy-Efficient Software".

    Computational Efficiency

    Multithreading

    Execution can be accelerated by taking advantage of multiple threads and cores, leading to increased idle time that in turn leads to energy savings.

    Try to balance your threads, as imbalanced threads may lead to increased energy consumption. The threaded workload can be decomposed using functional decomposition or data decomposition. By threading the workload using data decomposition, multithreaded performance is less likely to be affected by future functional changes. It is preferred to let the OS handle scheduling of threads as opposed to affinitizing threads to a certain core. For additional details on multithreading and thread balancing, please refer to the following article: Maximizing Power Savings on Mobile Platforms.

    To analyze how your application performs with regard to threading, we suggest using the Intel® Threading Analysis Tools.

    If you are designing threaded components, also consider Intel® Threading Building Blocks.

    Finally, the Concurrency Improvement Center is designed to provide parallel programming tools and resources, starting with the basics, written by our Community experts, to help you improve the concurrency level of your software: http://software.intel.com/en-us/articles/concurrency-improvement-center/.

    Reduce Use of High-Resolution Periodic Timers

    A good way of reducing an application's energy footprint is to let it idle as often as possible. Make sure the application uses the longest timer interval that still fulfills its requirements; timer intervals shorter than 15 ms provide little benefit for most applications. Always disable periodic timers when they are not in use, letting the OS adjust the minimum timer resolution accordingly.

    Intel® Power Checker provides a quick method of assessing platform timer tick behavior during both application active and idle modes (Microsoft Windows) on mobile platforms using the Intel Core processor family or Intel® Atom™ processor: http://www.intel.com/partner/sat/pe.

    For driver or kernel code running under Linux, additional timer optimization techniques are available. Using “round jiffies” (http://www.lesswatts.org/projects/tickless/round_jiffies.php), non-critical timers can be grouped, decreasing the number of wakeups. Using “deferrable timers” (http://www.lesswatts.org/projects/tickless/deferrable.php), non-critical timers can be queued until the processor is woken from idle by a non-deferrable timer.

    Loops

    Minimize the use of tight loops. To reduce the overhead associated with small loops, the performance/power relationship can be improved by loop unrolling. To achieve this goal, the instructions that are called in multiple iterations of the loop are combined into a single iteration. This approach will speed up the program if the overhead instructions of the loop impair performance significantly. Side effects may include increased register usage and expanded code size.

    Greater power savings can often be achieved with the 2nd generation Intel® Core™ processor family by exploiting Intel’s most recent Loop Stream Detection (LSD) technology than by performing loop unrolling. Refer to sections 2.1.2.3 and 3.4.2.4 in the Optimization Reference Manual, http://www.intel.com/design/processor/manuals/248966.pdf for details. For best results, investigate both approaches and compare the energy efficiency achieved with each.

    It is advisable to convert polling loops to an event-driven design. If polling loops are necessary, make them efficient (i.e., use the largest possible polling interval).

    Try to eliminate busy wait (spinning) loops, although in some cases such as when locking or synchronizing using shared memory, the best approach might still be to use a spin-wait loop. For efficient spin-wait loops, it is recommended to use the “pause” instruction.

    Performance Libraries/Extensions

    Using instruction set extensions such as SSE instructions, Intel® Advanced Vector Extensions (Intel® AVX) or more recent extensions, performance and energy efficiency can often be improved for computation intensive applications. The instruction set extensions are often based on the concept of processing multiple data using one instruction (SIMD). For additional details about Intel AVX, refer to: http://software.intel.com/en-us/avx/.

    Application energy efficiency can also be improved by utilizing libraries that are optimized for performance. Intel provides library packages addressing this specific topic, including Intel® Integrated Performance Primitives and the Intel® Math Kernel Library. This collection of libraries contains optimized implementations of common algorithms in areas such as audio, video, imaging, cryptography, speech recognition, and signal processing. For additional details refer to: http://software.intel.com/en-us/intel-ipp/.

    Media applications can benefit from greater power efficiency by taking advantage of video encode and decode hardware acceleration that is available on the 2nd generation Intel Core processor family with Intel® HD Graphics, for example. The Intel® Media Software Development Kit (Intel® Media SDK) provides developers with a standard application programming interface (API) to create high performance video solutions that run on a variety of current and future Intel® processors. Learn more about the Intel Media SDK at http://www.intel.com/software/mediasdk.

    Tools such as Intel® VTune™ Amplifier XE can provide more in-depth performance analysis of your application. Using the VTune environment, developers can drill down on specific performance bottlenecks in an application. See http://software.intel.com/en-us/intel-vtune-amplifier-xe/ for more information. The tools and packages listed above are available for both Windows* and Linux.*

    Algorithms

    Generally, it is advised to improve energy efficiency by using high performance algorithms and data structures that complete tasks faster, allowing the processor to idle. If requirements allow, an alternate approach is to investigate the suitability of a less complex (and more energy efficient) algorithm. The solution can also be augmented with the ability to “hot switch” the algorithm depending on the machine power context (refer to the Context/Power-Aware section of this document). For instance, an application might select a lower-quality video encoder/decoder when running on batteries.

    Be aware that heavily recursive algorithms can be energy inefficient, as they often add overhead by using or exercising more stack than non-recursive algorithms.

    Compiler Optimization

    Energy efficiency can often be improved by optimizing applications for speed using available compiler options such as “O2” or “O3” on Intel® compilers and GNU compilers.

    Extended optimizations can be achieved by using application profiling to provide insights such as the most common execution paths. This undertaking generally entails instrumenting the application, executing a profile run, and feeding back profiling information to the compiler. The Intel compilers provide options such as prof-gen and prof-use for profiled compilation.

    For more information on the Intel compilers, visit http://software.intel.com/en-us/intel-compilers/.

    Drivers

    Identify the kernel, drivers, and libraries used by the application and determine whether there are alternative implementations of components that are more power friendly. For instance, a more recent Linux kernel may feature scheduling optimizations that can make the application run more efficiently. Another possibility could be to update to a more recent and energy efficient Bluetooth* device driver.

    Programming language

    If possible, consider using a programming language implementation and libraries that are idle-power friendly. Some high-level run-time languages may cause more frequent wakeups compared to lower level system programming languages such as C.

    Data Efficiency

    Efficient handling of application data can often reduce the energy required to perform a given task.

    One approach used to reduce data movement is to buffer data transferred to and from typical storage devices such as hard disks and optical disks. By pre-fetching and/or buffering data, thereby avoiding frequent reads and writes, the device is left more time to idle. Examine your application to determine whether data requests can be buffered and batched into one operation. For additional details, refer to http://software.intel.com/en-us/articles/creating-energy-efficient-software-part-2/#dataefficiency.

    Another method to minimize data movement is to optimize how data is stored in memory. It is preferable to store data as close as possible to the processing entity. For instance, data efficiency will improve if an algorithm is optimized so that it uses data in cache as often as possible instead of accessing data from RAM.

    It is also beneficial to study how resources (such as memory) are shared between processor cores. One core using a shared resource may prevent other cores from descending into a lower sleep state (C-state). To resolve this issue, try to synchronize threads on different cores to work simultaneously and idle simultaneously.

    To analyze how memory is exercised in your application, it is advisable to use memory-profiling tools such as the Intel® VTune™ Performance Analyzer and Intel® Performance Tuning Utility.

    Context/Power-Aware Behavior

    Handling Sleep Transitions Seamlessly

    Applications can improve power awareness and user experience by reacting/adapting to platform sleep/hibernate/wake-up power transitions. Reacting to platform power transitions usually means that applications should handle the transitions without requiring a restart, loss of data, and change in state. In addition, for a good user experience, applications should handle the power transitions transparently with no user interaction.

    Following are some tasks that applications should consider with regard to sleep transitions:

    • Saving state/data prior to the sleep transition and restoring state/data after the wake-up transition
    • Closing all open system resource handles such as files and I/O devices prior to the sleep transition
    • Disconnecting all communication links prior to the sleep transition and re-establishing all communication links after the wake-up transition
    • Synchronizing all remote activity (such as writing back to remote files or to remote databases) after the wake-up transition
    • Stopping any ongoing user activity (such as streaming video)

    On Windows operating systems, the SetThreadExecutionState API is used to prevent the system from transitioning to sleep mode. The application should use the API with care and only prevent system idle timeouts when necessary. Remember to reset execution state when the task (such as presentation mode) is complete.

    For additional details on this topic, refer to the article, “Application Power Management for Mobility”, including some examples on how to handle system transitions on the Windows OS (using WM_POWERBROADCAST, SetThreadExecutionState, etc.) and to the article, “Graceful Application Suspension”.

    Respond/Adapt to System Power Events

    For some applications, responding to transitions between battery and AC operation, including displaying battery status, can improve the user experience and depending on the measures taken, it may also improve energy efficiency.

    To avoid duplicate work in the case of transition to standby, it is advisable to handle low-battery events by saving work, state, or checkpoint. Applications also benefit by adapting to the user-selected OS power policy (scheme).

    Scale Behavior Based on Machine Power State

    Improved application energy efficiency can be achieved by scaling the application behavior in response to a change in machine power state. Following are some examples of scaled behavior:

    • Reduced resource usage when on battery power (such as disabling automatic background updates/downloads)
    • If possible, switch to a “low-power” algorithm when on battery power or running low on battery
    • Reducing the quality of video and audio playback in a DVD application to extend playtime while travelling
    • Turning off automatic spell check and grammar when on battery power

    For additional details on this topic, refer to the following article: http://software.intel.com/en-us/articles/how-to-extend-battery-life-in-mobilized-applications/.

    Depending on machine state, also explore the option of letting the application inform the user to select a lower power profile (when necessary) for more energy efficient execution.

    Intel® Power Checker provides a quick method of assessing application power efficiency during both application active and idle modes (Microsoft Windows*) on mobile platforms using the Intel® Core processor family or Intel® Atom processor: http://www.intel.com/partner/sat/pe.

    Context Awareness Toolkits

    To simplify some of the above context awareness issues, consider the use of existing context awareness toolkits such as the Intel® Laptop Gaming TDK or Intel® Web APIs.

    The Intel Laptop Gaming TDK is available for Windows 7* and Windows* 8 and provides an easy interface to help extend games by adding mobile-aware features to create a better laptop gaming experience. For TDK details and how to download, refer to http://software.intel.com/en-us/articles/intel-laptop-gaming-technology-development-kit/.

    The Intel Web APIs allows developers to extract information about the platform's configuration (e.g., display, storage, and processor), and the platform's context (e.g., bandwidth, connectivity, power, and location) within a browser using JavaScript. For API details and how to download, refer to http://software.intel.com/sites/whatif/webapis/.

    Unused Peripherals

    To further improve energy efficiency, explore the option of disabling or even turning off unused peripherals.

    For instance, in case an application exclusively uses Bluetooth on the system, the device could be disabled temporarily if there is no activity, leading to improved energy efficiency.

    Tools and Testing for Energy-Efficiency

    To analyze your application’s energy efficiency, it is recommended to profile platform power usage during application runtime. Please refer to the list of tools described in the next chapter to assist analysis.

    During profiling, explore the following power-related aspects of application execution to understand the power impact of the application in both idle and running states:

    • Examine C-state behavior
    • Examine P-state behavior
    • Examine timer interrupts
    • Examine interrupt statistics
    • Examine disk and file access statistics

    Tools

    A range of tools are available that address power-related frameworks, optimizations, and measurements.

    Intel Power Checker (Microsoft Windows* 7/Windows* 8)
    Intel Power Checker can be used to quickly assess idle power efficiency (C3 state residency), platform timer tick, and power-aware behavior for applications that run on mobile platforms using the Intel Core processor family or Intel Atom processor: http://software.intel.com/en-us/software-assessment.

    Perfmon (Microsoft Windows* 7/Windows* 8)
    Perfmon can be used to assist optimizations, monitor the results of tuning and configuration scenarios, and understand a workload and its effect on resource usage to identify bottlenecks: http://software.intel.com/en-us/articles/use-windows-performance-monitor-for-infrastructure-health/

    PwrTest/Windows Driver Kit (Microsoft Windows* 7/Windows* 8)
    The Power Management Test Tool enables developers, testers, and system integrators to exercise and record power-management information from the platform. PwrTest can be used to automate sleep and resume transitions and record processor power management and battery information from the platform over a period of time: see http://msdn.microsoft.com/en-us/library/ff565246(v=vs.85).aspx and http://www.microsoft.com/whdc/system/pnppwr/powermgmt/PM_apps.mspx.

    Windows Event Viewer/Event Log (Microsoft Windows* 7/Windows* 8)
    Windows Event Viewer/Log provides a centralized log service to report events that have taken place, such as a failure to start a component or to complete an action. For instance, the tool can be used to capture “timer tick” change events, which have an indirect effect on platform energy efficiency: http://en.wikipedia.org/wiki/Event_Viewer.

    Windows ETW (Microsoft Windows* 7/Windows* 8)
    Event Tracing for Windows (ETW) provides application programmers the ability to start and stop event tracing sessions, instrument an application to provide trace events, and consume trace events. You can use the events to debug an application and perform capacity and performance analysis: see http://msdn.microsoft.com/en-us/library/windows/desktop/bb968803(v=vs.85).aspx.

    PowerInformer (Microsoft Windows* 7/Windows* 8)
    PowerInformer provides relevant and condensed platform power information to the developer, including for instance battery status, C and P state residency, interrupt rate and disk/file IO rates: see http://software.intel.com/en-us/articles/intel-powerinformer/.

    PowerTOP (Linux*)
    PowerTOP helps to point out the power inefficiencies of your platform. The tool shows how well the platform is using the various hardware power-saving features and culprit software components that are preventing optimal usage. It also provides tuning suggestions on how to achieve low power consumption: see http://www.lesswatts.org/projects/powertop/.

    Battery Life Toolkit (BLTK) (Linux*)
    Battery Life Toolkit (BLTK) provides infrastructure to measure laptop battery life, by launching typical single-user workloads for power performance measurement: see http://www.lesswatts.org/projects/bltk/.

    Linux command line tools
    Standard Linux command line tools can also assist power analysis, covering disk/device activity (for example, iostat or iotop), application activity (for example, top or strace), and other system statistics (for example, vmstat).

    OPTIMIZATION NOTICE

    Intel® compilers, associated libraries and associated development tools may include or utilize options that optimize for instruction sets that are available in both Intel® and non-Intel microprocessors (for example SIMD instruction sets), but do not optimize equally for non-Intel microprocessors. In addition, certain compiler options for Intel compilers, including some that are not specific to Intel micro architecture, are reserved for Intel microprocessors. For a detailed description of Intel compiler options, including the instruction sets and specific microprocessors they implicate, please refer to the “Intel® Compiler User and Reference Guides” under “Compiler Options." Many library routines that are part of Intel® compiler products are more highly optimized for Intel microprocessors than for other microprocessors. While the compilers and libraries in Intel® compiler products offer optimizations for both Intel and Intel-compatible microprocessors, depending on the options you select, your code and other factors, you likely will get extra performance on Intel microprocessors.

    Intel® compilers associated libraries and associated development tools may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include Intel® Streaming SIMD Extensions 2 (Intel® SSE2), Intel® Streaming SIMD Extensions 3 (Intel® SSE3), and Supplemental Streaming SIMD Extensions 3 (Intel® SSSE3) instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor dependent optimizations in this product are intended for use with Intel microprocessors.

    While Intel believes our compilers and libraries are excellent choices to assist in obtaining the best performance on Intel® and non-Intel microprocessors, Intel recommends that you evaluate other compilers and libraries to determine which best meet your requirements. We hope to win your business by striving to offer the best performance of any compiler or library; please let us know if you find we do not.

    *Other names and brands may be claimed as the property of others.

    Windows* Store App Features and Differentiators

    Download Article

    Windows* Store App Features and Differentiators [PDF 740KB]

    Abstract

    This article highlights the key features of the Microsoft Windows* 8 operating system. It provides a summary of the differentiating features of Windows Store apps running on Intel® Core™ and Intel® Atom™ processors. One objective of this article is to serve as an introductory reference for application developers new to Windows 8.

    Overview

    Officially released in October 2012, Windows 8 introduces a dramatically redesigned user interface compared with previous versions of Windows. Some of the compelling new features include:

    • The Start Screen
    • The ability to run across many different types of hardware devices, from tablets based on Intel Atom processors to Ultrabook™ systems with Intel Core processors
    • Optimized for Touch while providing full keyboard and mouse support
    • Apps from the Windows Store

    On Windows 8, Windows Store apps can incorporate the following new UI/UX features, which we will describe in more detail later in this document:

    • Live tiles
    • Charms
    • “Lock screen” updates
    • App bar
    • Snapped and fill view
    • Semantic zoom

    For application developers, these features can be implemented with the Visual Studio* 2012 and Windows Runtime APIs using familiar programming languages such as Visual C#*, JavaScript*, or Visual C++*.

    Live Tiles

    On the Windows 8 Start screen, installed applications are depicted using icons or “Tiles.” Touching a tile or clicking it with the mouse launches the application.

    On the Start screen, tiles can be presented in two sizes, a square (small) and a rectangle (large/wide). For applications that support both tile sizes, users can choose to display either. Tiles can either display a static image/icon or a dynamic “live” image that is updated via notifications from the application represented by the tile.

    Tiles present rich content from apps even when they are not the active application. When a user is on the Start screen, all applications that present a live tile update their content while running in the background. For example, a weather app can display the current local temperature, and a finance app can show the current market snapshot.



    Figure 1 Tiles in Windows* 8 Start screen.

    For application developers, a compelling live tile makes your app stand out on the Windows Start screen. Windows Runtime provides Windows.UI.Notifications APIs and a catalog of tile templates to implement tile updates. The following tutorials can help you get started:
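    As a minimal illustration, a tile update is delivered as a small XML payload based on one of the catalog templates; the template name and text below are examples only:

```xml
<!-- Example tile notification payload. "TileSquareText04" is one of the
     catalog templates (a single wrapped text string); the text is illustrative. -->
<tile>
  <visual>
    <binding template="TileSquareText04">
      <text id="1">72°F and sunny in Hillsboro</text>
    </binding>
  </visual>
</tile>
```

    An app sends payloads like this through the Windows.UI.Notifications tile update APIs, either locally on a schedule or via push notifications.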

    Charms

    On Windows 8, at any time and from anywhere, if you swipe in from the right edge of the screen or point the mouse at the upper-right or lower-right corner, a vertical bar appears on the right side of the screen showing additional functions, i.e., charms. They include Search, Share, Start, Devices, Settings, and more. Charms can be customized to include extra behaviors inside a specific app beyond what is available from the Start screen.



    Figure 2 Charms appear on the right-side of the screen when the right edge swipe gesture is used.

    We will discuss the functionality of each charm below.

    Search

    With the Search charm, you can search for apps, files, and other items on your system locally. You can also search for things on the Internet using a specific app or service, such as Bing*.



    Figure 3 The Search charm.

    In Windows Store apps, you can let users search within your app by adding the Search contract. The following tutorial will get you started:

    Share

    Within a specific app, you can quickly share files, photos, web links, or other information with other people by using the Share charm. It will display a list of apps or services you can share with.



    Figure 4 The Share charm under Bing* Weather lets you share the current local weather through email and other social networking apps

    The Windows.ApplicationModel.DataTransfer namespace includes the basic classes and APIs you need to enable sharing in your Windows Store apps. The following tutorial has more information:

    Start

    The Start charm allows you to quickly go back to the Start screen from any app. If you are already on the Start screen, pressing or clicking the Start charm returns you to the last app you used.

    Devices

    The Devices charm is mainly used to set up connections with external devices, such as printers, displays, or wireless TVs.

    Settings

    You can use the Settings charm to personalize your PC and customize a specific app.



    Figure 5 The Settings charm allows you to personalize the PC

    Windows Store app developers can use the Windows.UI.ApplicationSettings class to implement the Settings contract and to add app settings:

    In summary, we discussed the usages of charms in Windows 8 and provided links on how to implement charms in Windows Store apps.

    App Bar

    Most desktop operating system users are familiar with the Menu bar and the Tool bar that allow access to various functions/actions supported by an application. Typically, with current desktop operating system apps, even though the Menu and Tool bars are not needed all the time while interacting with the app, these control bars permanently occupy the application’s UI real estate and can be distracting.

    In Windows 8 Store apps, to embrace the immersive and full-screen design fundamentals, the menu options and commands are no longer permanently displayed while the user interacts with the app, but are instead part of the App bar. They no longer permanently occupy the app’s valuable UI real estate. Both the app bar and charms, discussed in the previous section, are “UI on demand.” They only show up when the user requests them.

    App bars can be at the top of the screen, at the bottom of the screen, or both. The user can swipe in from either the top edge or the bottom edge, or right-click within the screen, to display the app bars. When the user swipes or right-clicks to select items on a screen, the app bar responds by showing the options and commands valid for the currently selected item(s).

    While charms expose standard functionality and operation supported by all the installed apps, application specific items can be included in the app bar based on the app’s context. They can be different across apps, across the screens within an app, and across different items selected on a screen.



    Figure 6 App bar for a Calendar app

    Windows Store app developers can use XAML’s AppBar control to easily add an app bar. Please refer to the following tutorial for more information:
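    For reference, a bottom app bar can be declared in XAML roughly as follows. This is only a sketch: the button styles are the AppBarButtonStyle resources shipped with the Visual Studio 2012 project templates, and the click handlers are placeholders:

```xml
<!-- Sketch of a bottom app bar; styles and handlers are illustrative. -->
<Page.BottomAppBar>
  <AppBar>
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Right">
      <Button Style="{StaticResource AddAppBarButtonStyle}"
              Click="AddItem_Click"/>
      <Button Style="{StaticResource DeleteAppBarButtonStyle}"
              Click="DeleteItem_Click"/>
    </StackPanel>
  </AppBar>
</Page.BottomAppBar>
```

    Because the bar is hidden until the user requests it, the commands add no permanent cost to the app’s screen real estate.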

    Lock screen apps

    By default, the Windows 8 lock screen always shows some basic system information, such as date, time, network status, and battery level. When the device is in a locked state, Windows 8 also allows up to seven apps to run in the background and display badges and toasts on the lock screen. In addition, one of those apps is allowed to show its latest tile notification text. Users can configure which apps show their status and notifications from the Settings charm > Change PC settings > Personalize > Lock screen.



    Figure 7 A lock screen shows the Email app and the weather app's status and notifications. The weather app shows the tile notification text.

    As Windows Store app developers, you can enable your apps to show tile and badge updates on the lock screen by following the tutorial:
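    A badge update, for instance, is just a one-element XML payload; a sketch (the value shown is illustrative):

```xml
<!-- A numeric badge value (1-99) or a named glyph such as "alarm" can be shown. -->
<badge value="3"/>
```

    An app sends this through the Windows.UI.Notifications badge update APIs; the app must also declare lock screen support in its package manifest to appear there.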

    Snapped and Fill Views

    For devices with a horizontal resolution of 1366 relative pixels or greater, Windows 8 allows users to interact with two applications simultaneously. Both applications run side by side on the screen, where one application occupies ¾ of the screen (Fill view) and the other occupies the remaining ¼ of the screen (Snapped view). The snapped app can be on either the left or right side of the screen, while the fill-view app occupies the remainder of the screen along with the divider. The divider can be dragged to the left or the right to switch an application from Fill view to Snapped view and vice versa.

    Snapped view mode can be invoked via a “swipe and hold” gesture with a finger or mouse pointer by swiping from the top of the screen towards either the left or right side of the screen until a divider appears. The app can then be “dropped” in the smaller snapped region by releasing the mouse or by lifting the finger off of the screen.



    Figure 8 The Bing* Finance app is running in the snapped view mode, while the Bing Weather app is running in the fill view mode

    To make the Windows Store apps behave properly when running in the snapped view mode, you must adhere to the following guidelines:

    Semantic Zoom

    People who have experience with touch-screen devices that support multi-touch are familiar with using pinch gestures to zoom in and out to enlarge or shrink an image, a web page, or a map view. Semantic Zoom extends the zoom concept in Windows Store apps by allowing zoom to operate on other data.

    To invoke semantic zoom, the user uses the pinch gesture, or holds the Ctrl key while scrolling the mouse wheel.



    Figure 9 The zoomed-in mode (normal mode) of the Bing* Finance app.



    Figure 10 The zoomed-out mode of the Bing* Finance app.

    To support semantic zoom, the application should organize and present data or information in two distinct modes: one low-level (or zoomed-in) mode that displays the normal data or information, and one high-level (or zoomed-out) mode that displays a summarized view with data or information grouped and categorized. From the zoomed-out mode, users can quickly jump to the detailed section they want to access.

    An example of semantic zoom is an app that primarily displays a list of customers; its zoomed-out mode would show a list of the states the customers reside in.

    To support Semantic Zoom in the Windows Store apps, follow these guidelines:
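    In XAML, the two modes map directly onto the two views of the SemanticZoom control. A sketch for the customers example above, where the grouped CollectionViewSource resource name and the omitted item templates are assumptions:

```xml
<!-- ZoomedInView shows individual customers; ZoomedOutView shows the group
     headers (e.g., states). The customersViewSource resource is illustrative. -->
<SemanticZoom>
  <SemanticZoom.ZoomedInView>
    <GridView ItemsSource="{Binding Source={StaticResource customersViewSource}}"/>
  </SemanticZoom.ZoomedInView>
  <SemanticZoom.ZoomedOutView>
    <GridView ItemsSource="{Binding CollectionGroups,
                            Source={StaticResource customersViewSource}}"/>
  </SemanticZoom.ZoomedOutView>
</SemanticZoom>
```

    Tapping a group header in the zoomed-out view jumps the zoomed-in view to that group, which is how users quickly reach the detailed section they want.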

    Development Tools

    To develop Windows Store apps, application developers can use Visual Studio 2012 and the languages they may have learned and used before: Visual C#, Visual C++, HTML5/CSS, JavaScript, Visual Basic*, etc. Visual Studio 2012 is also integrated with popular and powerful design tools such as Blend.

    Improved User Experiences on Intel® Architecture-based Devices

    On Intel® Architecture-based systems, application developers can utilize features available on Intel® processors. They can use Intel® Wireless Display (WiDi) in Windows 8 Desktop apps to provide a premium user experience and unique usage models:

    Developers can also integrate Intel® Perceptual Computing SDK features such as gesture recognition, voice recognition, and augmented reality to create new app experiences:

    Summary

    In this article, we discussed the differentiating features in Windows 8 and Windows Store apps. We also provided links to tutorials on how to use these features.

    References and Resources

    A good starting point for developers to become familiar with the tools and features of programming Windows Store apps:

    Some links that dig deeper into the programming details of Windows Store apps:

    A summary page with multiple links on articles with details about writing multimedia Windows Store apps:

    Various Intel® Developer Zone articles about programming for Windows 8 written by the Intel team:

    In addition to the articles above, samples of Windows Store Apps are a great resource, and you can find some here:

    If you have some previous exposure to XAML, this article addresses some of the differences with programming for Windows Store apps.

    BUILD videos are another great resource that are easy to search and find what you are looking for:

    The last recommendation is to sign up for the developer forums on the Windows 8 site. If any technical questions or API problems are encountered, the question might have already been discussed/answered there, or it is easy to start a new thread to get the question answered:

    Other Reference articles and blogs:

    About the Author

    Miao Wei is a software engineer in the Intel Software and Services Group. He is currently working on scale-enabling projects for Intel Atom processors.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    Intel® VTune™ Amplifier XE 2013 Update 6 What's New

    Intel® VTune™ Amplifier XE 2013 Update 6 release is now available for download at Intel Registration Center

    New for Update 6!

    • Caller/Callee window enabling the detailed analysis of the parent and child functions for a particular focus function
    • Optimized welcome page providing direct access to analysis configurations and recent analysis results
    • Separate configuration tabs for Binary/Symbol Search and Source Search
    • Context help for hardware events and performance metrics columns in the grid
    • Overhead and Spin time metrics in the grid and Timeline pane
    • Time scale configuration (Elapsed time, OS timestamp, and CPU timestamp options) for the Timeline pane
    • Fedora* 18 and Red Hat* Enterprise Linux* 6.4 support
    • Bug fixes and improvements in the experimental features available for evaluation by internal customers
      • Windows-only GPU-related experimental features now include C for Media tasks support
      • Windows Blue x64 support by all analysis types

    Details:

    • The Caller/Callee window is available in all viewpoints that provide call stack data. Use this window to analyze parent and child functions of the selected focus function and identify the most time-critical call paths. You can double-click a function of interest to go to the source view and explore the function performance by a source line. Use the Filter In by Selection grid context menu option on a function of interest to display functions included into all sub-trees that contain the selected function at any level. For more information please refer to the “Window: Caller/Callee” topic in the product help.

     

    • Improved welcome page now provides quick access to the recently used analysis configurations and analysis results.

     

    • Separate configuration tabs for Binary/Symbol Search and Source Search. Use these tabs to configure the search directories for the binary/symbol and source files required to finalize collected data and to work with the source/assembly view. For example, if the application to analyze and its source files were moved from the location where the application was compiled, the directories for the separate debug files and source files should be specified in these tabs for proper symbol resolution and source/assembly view support.

    • To get context help on a particular hardware PMU event or performance metric, select the What’s This Column? grid context menu option.

     

    • Overhead and Spin time metrics are provided in the grid and Timeline pane of the Hotspots by CPU Usage, Hotspots by Thread Concurrency, and Lightweight Hotspots viewpoints. These metrics allow you to identify inefficiencies in the use of threading runtimes (for example, Intel® Threading Building Blocks, Intel® Cilk™, OpenMP*) when a significant portion of time is spent inside the parallel runtime, wasting CPU time at high concurrency levels (overhead), or when a significant portion of CPU time is spent on spin (active) waits. For more information please refer to the “Overhead and Spin time” topic in the product help.

    NOTE: VTune Amplifier ignores the Overhead and Spin time when calculating the CPU Usage metric.

    • To change the measurement units on the time scale, select the Show Time Scale As context menu option in the Timeline pane, and choose from the following values:
      • Elapsed Time (default)
      • OS Timestamp
      • CPU Timestamp

    • On Fedora* 18, the pango packages, including pangox-compat, should be installed

     

    • Windows-only GPU-related experimental features were enriched with many improvements and bug fixes per customers’ evaluation feedback:
      • C for Media (CM) tasks support; for more details please refer to “gpu-cm”. The feature requires a specific version of the GFX driver that supports CM profiling; stay tuned for a separate announcement on driver availability.
      • GEN metrics support for Valleyview
      • Better support for various HSW GT1 configurations
      • Command line report template for GPU compute tasks. Stay tuned for the coming blog on the usage details.
      • Bug fixes, including improved GPU frequency correctness and fixes for OpenCL applications
      • Top hot GPU Computing Tasks and GPU info on summary


    NOTE: The GPU analysis results are not backward compatible, i.e., Update 6 will not open results collected with Update 5

    Intel® Advisor XE 2013 Update 3 Readme

    Intel® Advisor XE 2013

    Intel® Advisor XE 2013 guides developers to add parallelism to their existing C/C++, Fortran, or C# programs.

    New in Update 3!

    • Improved assistance window
    • Snapshot copy procedure cancellation functionality
    • Improved suitability by excluding paused time
    • New educational sample for matrix multiplication
    • Several usability improvements

    Resources

    • Knowledgebase articles
    • Training Videos (click on the "Learn" tab to select a video)

    Contents

    File: advisor_xe_2013_update3.tar.gz
    Installer for Intel® Advisor XE 2013 Update 3 for Linux*

    File: Advisor_XE_2013_update3_setup.exe
    Installer for Intel® Advisor XE 2013 Update 3 for Windows*

    * Other names and brands may be claimed as the property of others.

    Microsoft, Windows, Visual Studio, Visual C++, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

    Strategies for App Communication between Windows* 8 UI and Windows 8 Desktop

    Download article


    app-communication-ccefinal.pdf (228.17 KB)

    Abstract


    The Windows 8 WinRT API allows developers to create and deploy apps quickly, and to publish those apps in the Windows Store. When an app needs access to lower-level system resources, the Windows 8 Desktop APIs are needed. To get both together, developers must create two apps, one for each environment, with some method of communication between them. For apps sold in the Windows Store, this communication cannot be done locally. As stated in the Windows Store certification requirements:

    3.1b Windows Store apps must not communicate with local desktop applications or services via local mechanisms, including via files and registry keys.

    This article discusses the main approaches for communication between Windows 8 UI apps and Windows 8 Desktop apps, including the design considerations in deciding which to use, and the basics of implementing each approach. If a network connection is not an option, the required local communication would prohibit the apps from being listed in the Windows Store. With intranet connectivity, viable options include web services and a shared cloud. Expanding to the internet allows large-scale commercial solutions for shared storage and other web services.

    Overview


    Windows 8 UI apps are intended to be sleek, reliable, fast, and touch-oriented. Windows 8 UI apps are restricted in what parts of the file system, OS, and hardware they can access. Updating existing apps to this model introduces obstacles, with some app functionality being impossible to implement in the WinRT API. One solution is to create a Windows 8 UI front end that communicates with a Windows 8 Desktop app to perform the work not allowed by the WinRT API. There are a few ways to do this, limited by the Windows Store requirements.

    Businesses developing their own Windows 8 UI apps for internal use do not need to distribute apps through the Windows Store. As a result, these apps are not subject to the local sharing restriction for Windows Store apps. More info is available at http://blogs.msdn.com/b/windowsstore/archive/2012/04/25/deploying-metro-style-apps-to-businesses.aspx.

    Considerations


    Direction of Communication

    If your lightweight app is simply serving up data for user consumption, communication will be directed primarily (if not entirely) from the Desktop app to the Windows 8 UI app. It’s a rare case for the situation to be reversed, but not entirely unthinkable. If the front end is a more interactive user interface, it will need to communicate back and forth with the back end.

    App Switching or Background Communication

    If the communication required is discrete, like saving a file in a lightweight editor and switching to a fully-featured editor, the communication method can be more static. This affords more options in terms of implementation. Continuous communication brings more restrictions. Among the biggest is that this communication must originate from the Windows 8 UI app running as the front end. If the roles are reversed, with the Desktop app running in the foreground, its Windows 8 UI counterpart is likely suspended. A suspended app is essentially frozen in state, unable to communicate or process information.

    Connectivity

    If both apps are on the same machine, the options are limited drastically because many inter-process communication mechanisms available in previous versions of Windows cannot be used to connect WinRT with Win32-based applications. Increasing the scope to an intranet adds a few more options. A local cloud or server could store the files, notifications can be pushed using the Windows Push Notification Service, or web services can be used. If the machines have access to the internet, large-scale cloud storage becomes viable, as does using external web services.

    Standalone

    For a Windows 8 UI app to be deployed publicly, it must first be uploaded to the Windows Store, which requires that it pass certification. Windows 8 Desktop apps can also be listed in the store, but must be standalone (not requiring another piece of software be installed). Windows 8 UI apps in the store can depend on other software, but only if that software is also listed in the store. While Windows Store apps can make use of other programs (such as a server providing content) this can be a large obstacle to developing apps that work together. The Desktop app needs to have at least solid basic functionality, with the Windows 8 UI app serving as a companion program (adding more functionality) or an enhancement (improving the functionality that already exists). If the Desktop app is the back end, supplying the entire content of the Windows 8 UI app, it needs its own front end to serve its purpose independently.

    Viable Approaches


    Web Service

    A Windows 8 Desktop app can run as a back end with web service exposure, allowing the Windows 8 UI app to connect and communicate. The Desktop app can also be used as a mediator, handling the interactions and receiving requests over the connection. You can see an example of how to use web services from the Windows 8 side at http://www.codeproject.com/Tips/482925/Accessing-Data-through-Web-Service-in-Windows-8-St.

    Local Files

    When the two apps are on the same machine and are attempting to communicate without using the network, the limited options make things difficult but not impossible. As long as the deployment is internal and does not need to use the Windows Store, the apps could communicate via local files they can both access. If the Desktop app avoids some write-lock pitfalls, it can modify the shared files to enable communication in almost real time by writing and reading the files quickly and often. The Intel Energy Checker SDK uses a similar model for instrumentation (more information available in References). Since named pipes, shared memory, and other standard inter-process communication methods are not available to Windows Store apps, using local shared files is the main remaining option for this approach. The danger involved is that of user exposure; since the files are in a shared folder, users can access, view, and modify them. This raises security issues if the files are unencrypted and functionality issues if the user decides to lock, change, or delete the files. This approach is detailed at http://stackoverflow.com/questions/7465517/how-can-a-metro-app-in-windows-8-communicate-with-a-backend-desktop-app-on-the-s.

    Cloud - Storage and Notifications

    If network access is available, both apps can share files on a remote server or data cloud. Most cloud services have safeguards in place to prevent file access collisions and data loss. Using Windows Push Notification Services (WNS) allows the Desktop app to send messages and updates as notifications to the Windows 8 UI app. The notifications can be “toast” style, live tile updates, or handled by code for customized communication. You can see an example of how to use toast notifications at http://code.msdn.microsoft.com/windowsapps/Toast-notifications-sample-52eeba29 and information on a cloud backend at http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started.
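    As a rough illustration of the WNS flow, the snippet below builds the toast payload and the HTTP headers a back end would POST to the app's channel URI. This is a hedged sketch: obtaining the OAuth access token and the channel URI (both provided by the WNS/app registration flow) is not shown, and ToastText01 is simply the plainest of the Windows 8 toast templates.

```python
from xml.sax.saxutils import escape

def toast_payload(message):
    """Build a minimal toast XML body using the ToastText01 template."""
    return (
        '<toast><visual><binding template="ToastText01">'
        '<text id="1">' + escape(message) + "</text>"
        "</binding></visual></toast>"
    )

def wns_headers(access_token):
    """HTTP headers for POSTing the payload to the app's channel URI."""
    return {
        "Content-Type": "text/xml",
        "X-WNS-Type": "wns/toast",
        "Authorization": "Bearer " + access_token,
    }
```

    Swapping the X-WNS-Type header (e.g., to wns/tile) and the XML template switches the same POST from a toast to a live tile update.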

    Simulated Style

    If the above options cannot be used (e.g., a one-machine setup with sensitive communication), one solution is to drop the Windows 8 UI app entirely and instead use only a launching/shortcut tile to a fullscreen Desktop app designed in the Windows 8 UI style. While this forgoes some Windows 8 features, it allows full development without WinRT restrictions. You can find information and examples of this type of app at http://stackoverflow.com/questions/12881521/desktop-application-in-metro-style.

    Dead Ends


    Local Web Connection Loopback

    A web connection loopback can be used by applications on other operating systems to connect via a web socket to another program on the same machine. On Windows 8, however, security features disable loopback connections. This security can be disabled manually, but only in debug mode. Aside from making it prohibitive for commercial deployment, the Windows 8 UI app would never pass certification for listing in the Windows Store.

    Win32 Communication

    Windows 8 permits Windows 8 Desktop APIs to be packaged in a managed DLL for use by Windows 8 UI apps. Although this sounds promising in principle, it brings a host of new problems to the table. The DLL would need to be packaged with the app, bloating the size. If an app needs some Win32 functionality that does not violate the Windows Store certification policy, a custom library could be the right solution. While adding APIs to WinRT may be a good idea in specific cases, it’s not viable as a general strategy for inter-app communication.

    Summary of Viable Approaches

    Approach | Requires Network | Exposed to User | Two-Way Communication | Other Restrictions
    Local Files | No | Yes | Not entirely; the Windows 8 UI app must be the front end | Must be side-loaded, cannot use the Windows Store
    Cloud / Windows Push Notification Services (WNS) | Intranet + server | No | Yes for cloud, but WNS is directional toward the Windows 8 UI app only | Many cloud services require internet access or an in-network server host
    Web Services | Intranet + server | No | Yes | Many web service options require internet access or an in-network server host
    Style Simulation | No | No | N/A | No live tiles, can’t use charms from the Windows 8 Desktop, possible user confusion

    Conclusion


    There are many ways to get the benefits of both the Windows 8 UI and Desktop applications. As with any software solution, there isn’t a universally best option. Various restrictions limit the selection, but there are benefits to each method. When Internet access is readily available, cloud and web services are the most widely applicable approach. The information presented here is a starting point to help you make the right decision for your apps, and more options may become available in the future.

    Other References


    Windows 8 App Certification Requirements http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx
    Windows 8 Desktop App Certification Requirements http://msdn.microsoft.com/en-us/library/windows/desktop/hh749939.aspx
    Windows Server App Certification Requirements http://msdn.microsoft.com/en-us/library/windows/desktop/hh848078(v=vs.85).aspx

    About the Author


    Brad Hill is a Software Engineer at Intel in the Developer Relations Division. Brad investigates new technologies on Intel hardware and shares the best methods with software developers via the Intel Developer Zone and at developer conferences. He is currently pursuing a Master of Science degree in Computer Science at Arizona State University.

    Porting Game User Interfaces to Windows 8 Touch Devices

    Download Article


    Porting Game User Interfaces to Windows 8 Touch Devices.pdf [1.53 MB]

    © Copyright 2013 RIVER

    Purpose of the document


    The purpose of this document is to guide Windows game developers in their transition to Windows 8 on touch-enabled devices, with a sole focus on the user interface (UI). To that end, the document first reminds the reader of a few fundamentals of desktop user interfaces before comparing desktop and touch user interfaces.

    A list of Windows 8 specific guidelines is also provided.

    1. Basics


    1.1 Basic reminders about point-and-click interfaces

    The point-and-click paradigm for user interfaces is based on the simple fact that the user can point at a target with a pointing device (a mouse, for instance) and interact with it. Available interactions are then primary (left-click) or contextual (right-click). This paradigm is visualized, for instance, in the design of the mouse pointer (the index-pointing hand). Historically, the transition from mainly keyboard-controlled UIs to mouse-controlled ones created the need for graphical elements that let users take control in a more direct and intuitive way. That’s where the desktop metaphor and the use of icons, mouse pointers, and buttons come from: they all correlate to the real world, which makes them intuitive to use.

    1.2 Theory about touch

    Likewise, the transition from mouse to touch creates the need for a new UI paradigm, one that puts control directly at the users’ fingertips. Being able to interact with content directly with your fingers makes you psychologically closer to that content. The expectations of touch users are therefore different, more tangible, than those formed around the prosthetics that are the mouse and mouse pointer. Consequently, the main point of this transition is reducing the distance between the user and the content of the UI.

    Your main goal as a developer for a touch interface should be to enable effortless and direct manipulation of the content. Porting a game from desktop to touch therefore requires a solid strategy regarding content, controls, and interaction with the UI.

    2. Desktop and touch UI


    2.1 Layout and navigation

    When it comes to layout and navigation, games can involve complex nested and tiered menus containing the myriad options the game offers, or be dramatically simple and require no menu at all. The fluid UI of Windows 8 offers designers and developers the possibility of creating a seamless gaming experience.

    In designing the UI, designers should choose whether to adopt a hierarchical or a flat approach. The former is an advantage for games that extend the experience beyond the actual gameplay; an example could be FPS (first-person shooter) games, in which, besides the actual gameplay, players spend a reasonable amount of time configuring and preparing their characters. The latter offers a smoother solution for games that are primarily about the immediate experience.

    Hierarchical Navigation

    Most Windows Store games in Windows 8 will use a hierarchical system of navigation. This pattern is common and will be familiar to people, and it is made even better by the Hub navigation pattern. The essence of this pattern is the differentiation of content into three page styles at different levels of detail.

    a) Hub pages are the user’s entry point to the app. Content is displayed in order to provide a general overview to the users. Different categories are highlighted, representing each App Section with its content or functionality. The App Hub can show top stories, breaking news, content recommended for the user, and featured elements for all the different categories in one easily pannable surface. Each category group can bubble engaging content up to the hub. Tapping a group’s header enables the user to drill in to a particular section and see more content.

    b) Section pages are the second level of an app. Here designers should present the content of each section in more detail. Each of the elements on this page will have a dedicated Detail page.

    c) Detail pages are the third level of an app. The designer will here showcase specific functionalities and details of the app. The layout varies according to the app, depending on the amount of elements to be shown.

    Flat Navigation

    This pattern is often seen in games, browsers, or document creation apps, where the user moves between pages, tabs, or modes that are on the same hierarchical level. This pattern performs better when the app/game has a limited number of pages to navigate through.

    a) The Top App Bar is great for switching between multiple views. Examples include tabs, documents, and messaging or game sessions. This bar, explained also in chapter 2.2, can be triggered by users by swiping down from the top edge of the device.

    b) Unlike the hierarchical system, flat navigation does not offer a backwards navigation button. However, users can navigate by using the Top App Bar or by directly swiping the screen horizontally. Additional content or interactions within the app bar can be achieved through a plus button, as extensively explained in Microsoft’s documentation.

    2.2 Commands and Actions

    Designing for Windows 8 touch interfaces requires a focus on simplification and on reducing information clutter. To reach this goal, designers should identify the main user task of each screen and move secondary actions and elements off the canvas. Windows 8 provides several surfaces on which to place commands and controls: some within the canvas, others outside it (the App Bar, charms).

    Use of the Canvas

    Users should be able to complete the core scenarios just by using the canvas, not the chrome. Whenever possible, let users directly manipulate the content on the app’s canvas rather than adding commands that act on the content. For example, in Zynga’s CityVille one of the main user actions is shopping, so Zynga lets players shop for items directly from the game canvas rather than in a separate shop section.

    Use of the Charms

    The charms and app contracts are simple and powerful ways to enable common app commands.

    It’s important to avoid duplicating app contract functionality (e.g. share) on the app’s canvas or in the App Bar.

    • Search: Let users quickly search through the app’s content from anywhere in the system, including other apps. And vice versa.
    • Share: Let users share content (like game achievements for instance) from the app with other people or apps, and receive shared content.
    • Devices: Let users enjoy audio, video, or images streamed from the app to other devices in their home network.
    • Settings: Consolidate all of the app settings under one roof and let users configure the app with a common mechanism they’re already familiar with.

    Use of the App Bar

    The App Bar is used to display on-demand commands relevant to the current screen. An established element of the Windows 8 experience, the App Bar is also helpful for menus and contextual actions when designing touch games. As an example, RPGs can benefit from the App Bar by collecting in it all the possible user actions related to the current screen.

    Use of the Context Menus

    Use context menus for clipboard actions (like cut, copy, and paste) or for commands that apply to content that cannot be selected (like an image on a web page). The system provides apps with default context menus for text and hyperlinks. For text, the default context menu shows the clipboard commands. For hyperlinks, the default menu shows commands to copy and to open the link.

    Target Size

    When porting your app or game, it is suggested to differentiate between touch-enabled and touch-optimized target sizes. Per the Windows 8 guidelines, a touch-enabled target is a minimum of 23px by 23px, corresponding to a physical size of 5mm, while touch-optimized targets should preferably be 40px by 40px, corresponding to a physical size of 10mm. The overall consideration is that big targets remain usable with a mouse, while small targets are arguably unusable with touch.
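    The px↔mm relationship depends on the screen's physical pixel density, so the pixel figures above assume a particular density. A small helper (illustrative only; the 5mm/10mm thresholds are taken from the paragraph above, and the function names are assumptions) makes the conversion explicit:

```python
TOUCH_ENABLED_MM = 5     # minimum physical size for a touch-enabled target
TOUCH_OPTIMIZED_MM = 10  # preferred physical size for a touch-optimized target

def mm_to_px(mm, dpi):
    """Convert a physical size in millimeters to pixels at a given density."""
    return mm / 25.4 * dpi

def px_to_mm(px, dpi):
    """Convert a pixel size back to millimeters at a given density."""
    return px / dpi * 25.4

def classify_target(px, dpi):
    """Classify a square target by its physical size on this screen."""
    mm = px_to_mm(px, dpi)
    if mm >= TOUCH_OPTIMIZED_MM:
        return "touch-optimized"
    if mm >= TOUCH_ENABLED_MM:
        return "touch-enabled"
    return "too small"
```

    On a 96 DPI desktop monitor, for example, a 40px target is about 10.6mm and classifies as touch-optimized, while on a denser 135 PPI panel the same 40px shrinks to about 7.5mm, which is merely touch-enabled.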

    Spacing

    Together with a correct size strategy, a correct spacing should be adopted. From Windows 8 guidelines, a minimum of 2mm between targets would allow a correct use of the UI. This choice is also important for input fields, radio buttons and multiple selection in general.

    a) Buttons. The overall definition of what a button is and how it can be interacted with differs greatly between desktop and touch. On a desktop interface, buttons usually have a hover state and a selected state. Hover states such as visual changes, sounds, or tooltips provide additional information to the user about the button’s purpose and consequences. Touch-activated buttons have no hover state; therefore, designers should define another way to provide feedback to users throughout the application/game.

    Some examples could be sound hints, haptic feedback, or subtle animations. As a good design practice, designers should let users disable this feedback easily.

    As stated in the Windows 8 guidelines, the UI should reduce the use of buttons and instead enable gestures for interacting with the content of the app/game. Chapter 3.3 covers the topic of gestures in more detail.

    b) Dropdowns. On desktop, dropdowns typically replace any stepper/toggle with more than three options. These dropdowns work best with keyboard/mouse input; alternate solutions should be used for touch/controller. In Windows 8, a particular usage of dropdowns that goes beyond the traditional one concerns headers.

    Header dropdown menus often enable users to jump laterally among categories. For example, consider a user who is reading a sports article and wants to go to the entertainment section of your news app. The user can do that easily by using the dropdown header.

    c) Sliders. In most applications, sliders are used for large or “imprecise” values, such as volume, brightness, or sensitivity. In terms of usability, sliders work better on touch than they do with a keyboard or controller, given the mouse-like movements they imply. In games, sliders are often adopted for character setup or feature changes. Despite their high user-engagement value, it is always recommended to provide numerical feedback on the changed value.

    As an example, Skyrim offers sliders for character creation but gives no feedback on progress or the selected value; it becomes hectic in this scenario to keep track of setups and easily repeat them over time.

    Windows 8 is not giving specific guidelines on the matter, besides correct touch-area sizing.

    d) Scrollbars. Scrollbars in Windows 8 are very often avoided and are not considered meaningful feedback for the user: since touch devices allow scrolling directly with the fingers, direct “on-the-tip” feedback is already present. The Windows UX guidelines suggest instead providing animated interactions (e.g., elastic scroll) to give feedback to users. The same logic can be applied to all Windows 8 elements with dynamic content, such as Tiles.

    e) Tab Bars. Tab bars can be radically different on different platforms. Tabs are essentially just buttons when using mouse/keyboard navigation, but with touch, tabs can be changed with swipe gestures. Apps in Windows 8 tend to be very wide, so the user scrolls to the right to see different pages, which fulfills the same role as tabs on the desktop. There is, however, a need for a quick link to a specific section, which can be provided with a dropdown as mentioned earlier in this chapter.

    f) Lists. Lists are one of the most common UI elements for displaying large amounts of content, and they normally need the most work/polish. Apart from mouse hovers, lists can be mostly the same across all input methods. One thing to keep in mind is that the touch direction of scrolling is reversed from mouse/keyboard scrolling.

    g) Text. Although each game will understandably have its own predefined fonts, designers are advised to keep the Windows 8 guidelines in mind.

    The three main fonts adopted by Windows 8 are:

    • Segoe UI (the primary Windows typeface) for UI elements such as buttons and date pickers. Segoe UI supports Latin, Cyrillic, Greek, Arabic, Hebrew, Armenian, and Georgian alphabets.
    • Calibri for text that the user both reads and writes such as email and chat. Calibri supports Latin, Greek and Cyrillic alphabets.
    • Cambria for larger blocks of text such as for a magazine or RSS reader. Cambria supports Latin, Greek and Cyrillic alphabets.

    When choosing a font, tracking (global letter-spacing) in the UI is important to the overall readability of the text, particularly against dark or complex backgrounds. The Windows 8 guidelines recommend using a proportional unit for tracking (the em), which equals the type size in points. For example, the width of the em for 11pt type is 11 points.

    h) Transient UI (tooltips, flyouts, context menus, and message dialogs). Context menus, common on desktop and very often triggered by a right-click, are possible on Windows 8 but not recommended. The first choice of placement for contextual commands should be the App Bar, but in some cases, such as text selection, tap-and-hold context menus will be necessary.

    Because of the absence of a hover trigger on touch devices, tooltips should generally be avoided. Implementing this behavior is not impossible, but the high probability of unintentional taps can lead to a very frustrating experience for the user. Popups in Windows 8 can be either message dialogs or flyouts. Message dialogs should be used only to display urgent, experience-disrupting information such as errors or questions.

    2.3 Orientation and Views

    Postures

    When porting and designing games for touch, it is important to consider the touch areas and the way users interact with them. Several studies indicate that 80-90% of the population is right-handed, and it is interesting to observe the Intel study on the use of the Ultrabook.

    The two most common postures are similar to those for tablets: holding the device with two hands, or using just the index finger to touch. In both cases, the mapping shows that perimeter calls to action are the easiest for users to reach correctly. When it comes to tablets, there are two main scenarios to keep in mind. The first, where both hands are used to interact, is typical while typing and playing.

    The second, with the device held in one hand and one finger interacting, is more typical when navigating menus.

    In both cases a designer should locate the most important and crucial features where the user can reach them with ease. In case of constant use with two hands, main actions should be located in the lower part of the screen, towards the sides. In case of more frequent use with one hand, the main actions can also be placed in the top part of the screen, more centered. When configuring controls for games, good design practice recommends making controls customizable by users.

    2.4 Feedback and notifications

    Feedback

    Giving the player proper feedback is important, but difficult with touch screen controls. Vibration is one way to signal to the user that their input was registered, but not all devices support vibration. Sound is another way to indicate success or failure, but because the target devices are often portable, the user might be in a public area where sound is hard to hear or would disturb people nearby, so the audio could be muted. Visual cues work, but need to happen around the user’s fingers/hand to be seen.

    3. Checklist for Testing


    The purpose of this section is to give developers and designers a straightforward, practical checklist to consider when reviewing the ported application. The checklist offers a list of heuristics categorized by topic, each with a practical example.

    3.1 Accessibility

    Navigation

    Are menus broad (many items on a menu) rather than deep (many menu levels)?

    Consider the menu “spread” over the screen and the implications that gestures create (e.g., tap to select one item, multiple items, etc.). It might be necessary to collapse menu items (see chapter 3.4 for more).

    Navigation

    If the system has multiple menu levels, is there a mechanism that allows users to go back to previous menus?

    Consider gestures (e.g., swipe) as a viable backward-navigation mode.

    Navigation

    Is the content browsable by gestures?

    Semantic Zoom (technique to browse large set of data) and panning make navigation fast and fluid. Instead of putting content in multiple tabs or pages, use large canvases that support panning and Semantic Zoom.

    Layout

    Are action elements placed on the correct side of the screen?

    Most people hold a slate with their left hand and touch it with their right. In general, elements placed on the right side are easier to touch, and putting them on the right prevents occlusion of the main area of the screen.

    Although this is a generalized heuristic, it is worth considering when a developer wants to encourage one behavior over another.

    Layout

    Are interactive elements placed along the bottom corners?

    Because slates are most often held along the side, the bottom corners and sides are ideal locations for interactive elements.

    Layout

    Is content placed in the upper half of the screen?

    Content in the top half of the screen is easier to see than content in the bottom half, which is often blocked by the hands or ignored.

    Gestures

    Is the application facilitating straight line movements?

    Fingertip movements are inherently imprecise as a straight-line motion with one or more fingers is difficult due to the curvature of hand joints and the number of joints involved in the motion.

    Gestures

    Are all the main interactive elements easy to access?

    Some areas on the touch surface of a display device can be difficult to reach due to finger posture and the user's grip on the device.

    Posture

    Is the application mainly used with one hand holding, one hand interacting with light to medium interaction?

    Right or bottom edges offer quick interaction.

    Lower right corner might be occluded by hand and wrist.

    Limited reaching makes touching more accurate.

    Reading, browsing, email, and light typing.

    Posture

    Is the application mainly used with two hands holding, thumbs interacting with light to medium interaction?

    Lower left and right corners offer quick interaction.

    Anchored thumbs increase touching accuracy.

    Anything in the middle of the screen is difficult to reach.

    Touching middle of screen requires changing posture.

    Reading, browsing, light typing, gaming.

    Posture

    Is the application mainly used with the device resting on table or legs, two hands interacting with light to heavy interaction?

    Bottom of the screen offers quick interaction.

    Lower corners might be occluded by hands and wrists.

    Reduced need for reaching makes touching more accurate (e.g. when reading, browsing, email, heavy typing)

    Posture

    Is the application mainly used while the device rests on table or stand, with or without interaction?

    Bottom of screen offers quick interaction.

    Touching top of the screen occludes content.

    Touching top of screen might knock a docked device off balance.

    Interaction at a distance reduces readability and accuracy.

    Increase target size to improve readability and precision (e.g. when watching a movie, listening to music).

    Commands

    Do you have specific contextual action on one page?

    Use the app bar to display commands to users on-demand. The app bar shows commands relevant to the user's context, usually the current page, or the current selection.

    The app bar is not visible by default. It appears when a user swipes a finger from the top or bottom edge of the screen. The app bar can also appear programmatically on object selection or on right click.

    The App Bar is transient, going away after the user taps a command, taps the app canvas, or repeats the swipe gesture. If needed, you can keep the App Bar visible to ease multi-select scenarios.

    Commands

    Do you have clipboard or content actions in one page?

    You can use context menus for clipboard actions (like cut, copy, and paste), or for commands that apply to content that cannot be selected (like an image on a web page).

    Commands

    Are persistent commands placed on the right?

    Start by placing default commands on the right side of the app bar. If there are only a few commands, the app bar may end up with commands only on the right.

    For example, for the Browse commands, the view command set and the filter/sort set are persistent.

    Commands

    Are edges used correctly?

    If there is a larger number of commands, separate distinct command sets on the left or the right to balance out the app bar and to make commands more ergonomically accessible.

    For example, you can move the view command set to the left and keep the filter/sort set on the right. Moreover, when a set is active, the related commands appear to the right of the set.

    Commands

    Are disabled commands shown/hidden?

    Commands that are not relevant in certain circumstances should be hidden. When they do appear, they should not disrupt the ordering of persistent commands.

    For example, when map view is active the map view commands appear to the right of the view command set.

    Commands

    Is standard placement for standard commands adopted?

    Some commands are common and appear in many apps. To create consistency and instill confidence, standards should just be followed.

    Commands

    Is the “New” button given the right positioning?

    If your app calls for a "New" command, where any new type of entity is created (add, create, compose), place that command against the right edge of the bar. This gives every "New" command, regardless of the specific app or context, consistent placement and makes it easily accessible with thumbs.

    Touch

    Is the application using hover states?

    Touch uses a two-state model: the touch surface of a display device is either touched (on) or not (off). There is no hover state that can trigger additional visual feedback.

    Touch

    Are tooltips used instead of hover states?

    Show tooltips when finger contact is maintained on an object. This is useful for describing object functionality (drag the fingertip off the object to avoid invoking it).

    For small objects, offset tooltips so they are not covered by the fingertip contact area. This is helpful for targeting.

    Touch

    Is the application designed for multi-touch interactions?

    Support multi-touch: multiple input points (fingertips) on a touch surface. Manipulations should not be distinguished by the number of fingers used; interactions should instead support compound manipulations, for example pinching to zoom while dragging the fingers to pan.

    Occlusion

    Can UI elements be covered by fingers?

    Make UI elements big enough so that they cannot be completely covered by a fingertip contact area.

    Position menus and pop-ups above the contact area whenever possible.

    Text/Image

    Are you facilitating precise selection?

    Where precision is required (for example, text selection), provide selection handles that are offset to improve accuracy.

    3.2 Errors and reversibility

    Feedback

    Is the system offering feedback on touch actions?

    Increase user confidence by providing immediate visual feedback whenever the screen is touched.

    Interactive elements should react by changing color, changing size, or by moving. Items that are not interactive should show system touch visuals only when the screen is touched.

    Reversibility

    Are all the actions reversible?

    If you pick up a book, you can put it back down where you found it. Touch interactions should behave in a similar way — they should be reversible. Provide visual feedback to indicate what will happen when the user lifts their finger. This will make your app safe to explore using touch.

    Warnings

    Is sound used to signal an error?

    Warnings

    Do error messages indicate what action the user needs to take to correct the error?

    Warnings

    Does the system prevent users from making errors whenever possible?

    Warnings

    Does the system warn users if they are about to make a potentially serious error?

    3.3 Touch Language

    Gesture

    Is the primary action accessible by tapping?

    Tapping on an element invokes its primary action, for instance launching an application or executing a command.

    Gesture

    Is slide used to pan?

    Slide is used primarily for panning interactions but can also be used for moving, drawing, or writing. Slide can also be used to target small, densely packed elements by scrubbing (sliding the finger over related objects such as radio buttons).

    Gesture

    Is swipe used to select, command, and move?

    Sliding the finger a short distance, perpendicular to the panning direction, selects objects in a list or grid (ListView and Grid Layout controls). Display the App Bar with relevant commands when objects are selected.

    Gesture

    Is zooming allowed?

    While the pinch and stretch gestures are commonly used for resizing, they also enable jumping to the beginning, end, or anywhere within the content with Semantic Zoom. A Semantic Zoom control provides a zoomed out view for showing groups of items and quick ways to dive back into them.

    Gesture

    Is rotation allowed?

    Rotating with two or more fingers causes an object to rotate. Rotate the device itself to rotate the entire screen.

    Gesture

    Does swiping from the edge show app commands?

    On Windows 8, app commands are revealed by swiping from the bottom or top edge of the screen. Use the App Bar to display app commands.

    Gesture

    Does swiping from the edges show system commands?

    On Windows 8, swiping from the right edge of the screen reveals the charms that expose system commands.

    Swiping from the left edge cycles through currently running apps.

    Sliding from the top edge toward the bottom edge of the screen closes the current app.

    Sliding from the top edge down and to the left or right edge snaps the current app to that side of the screen.

    General

    Are interactions untimed?

    Interactions that require compound gestures such as double tap or press and hold need to be performed within a certain amount of time. Avoid timed interactions like these because they are often triggered accidentally and are difficult to time correctly.

    UI elements

    Does the content follow the user’s finger?

    Elements that can be moved or dragged by a user, such as a canvas or a slider, should follow the user's finger when moving. Buttons and other elements that do not move should return to their default state when the user slides or lifts their finger off the element.

    UI elements

    Is content always correctly visualized and not covered by fingers?

    Especially with moving targets, as above, it is important to keep content visible and prevent it from disappearing “under” the user’s fingers. For example, a value on a slider that changes as the slider moves must always remain visible. Another example, common in games, is drag and drop: developers should make sure the dragged content can be seen at all times. The same logic may apply to dropdowns and radio buttons.

    3.4 Technicalities

    Target size

    Is touch minimum size taken into account?

    7x7 mm (40px) is a good minimum size if touching the wrong target can be corrected in one or two gestures or within five seconds. Padding between targets is just as important as target size. This logic applies not only to buttons but also to sliders, dropdowns, scroll bars, and other UI elements offering control.

    Target size

    Do crucial actions have bigger target sizes?

    Close, delete, and other actions with severe consequences can’t afford accidental taps. Use 9x9 mm (50px) targets if touching the wrong target requires more than two gestures, five seconds, or a major context change to correct.

    Target size

    Extreme case

    If you find yourself cramming things to fit, it’s okay to use 5x5 mm (30px) targets as long as touching the wrong target can be corrected with one gesture. Using 2 mm of padding between targets is extremely important in this case.

    Mouse

    When a mouse is connected, is the correct UI presented to users?

    When a mouse is detected (through move or hover events), show mouse-specific UI. If the mouse doesn't move for a certain amount of time, or if the user initiates a touch interaction, make the mouse UI gradually fade away. This keeps the UI clean and uncluttered.
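The switching logic above (show mouse UI on mouse activity, fade it after a timeout or on touch) is framework-independent. A hedged, platform-agnostic sketch in Python; the class and event names are hypothetical, and a real app would feed it pointer events from its own UI framework:

```python
import time


class InputModeTracker:
    """Decides whether mouse-specific UI should be visible.

    Hypothetical sketch: mouse activity shows the mouse UI, touch hides it
    immediately, and inactivity fades it after `fade_after_s` seconds.
    """

    def __init__(self, fade_after_s: float = 5.0):
        self.fade_after_s = fade_after_s
        self._last_mouse_event = float("-inf")

    def on_event(self, kind: str, now=None) -> None:
        now = time.monotonic() if now is None else now
        if kind in ("mousemove", "hover"):
            self._last_mouse_event = now             # mouse detected: show mouse UI
        elif kind == "touch":
            self._last_mouse_event = float("-inf")   # touch hides mouse UI at once

    def show_mouse_ui(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return (now - self._last_mouse_event) < self.fade_after_s
```

The UI layer would poll `show_mouse_ui()` (or be notified on transitions) and fade hover affordances, scroll bars, and similar mouse-only chrome accordingly.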

    Mouse

    Are interactive objects presented as such with mouse events?

    Provide visual feedback (or hover effects) for UI elements to indicate interactivity during mouseover events.

    Resolution

    Is menu content resizing with screen size?

    The menu can resize automatically, showing more content as space increases; or the menu and its content can scale up proportionally; and in the case of dense content at small resolutions, the content area can be made scrollable.

    Resolution

    Are the “standard” case and the “worst” case both kept in consideration?

    It is advisable to start from the most common use case and then verify the design against the worst-case scenario, especially when it comes to fitting content and displaying the correct calls to action.

    Resolution

    Is the text size correct for both the “standard” case and the “worst” case?

    Where small resolutions do not offer enough space, it is advisable to collapse or hide some of the text in favour of correct visualization and clear communication.

    Scrolling

    Are selected states disabled when scrolling?

    It might be advisable to make selected states inactive when scrolling in order to avoid accidental selection.

    Orientation

    Is orientation lockable when both portrait and landscape modes are available?

    In games that make heavy use of the gyroscope (e.g. driving games), users should be able to lock the orientation to avoid unexpected changes between landscape and portrait.

    Content

    When the on-screen keyboard is triggered, are input fields accessible?

    The screen should be repositioned to put focus on the selected input field. In particular, always keeping an eye on the worst-case scenario (e.g. dense content) will yield better results.
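The repositioning above reduces to a small geometry calculation: scroll just enough that the focused field clears the keyboard, without pushing the field’s top off-screen. A hedged sketch (the helper name and coordinate convention are hypothetical; coordinates are pixels from the top of the pre-keyboard viewport):

```python
def scroll_to_reveal(field_top: int, field_bottom: int,
                     viewport_height: int, keyboard_height: int) -> int:
    """Return how many pixels to scroll content up (0 = no scroll needed)
    so a focused input field stays visible above the on-screen keyboard."""
    visible_bottom = viewport_height - keyboard_height
    if field_bottom <= visible_bottom:
        return 0  # field already fully visible above the keyboard
    # Scroll enough to lift the field's bottom above the keyboard, but
    # never more than would push the field's top off the screen.
    return min(field_bottom - visible_bottom, field_top)
```

In the worst case (dense forms near the bottom edge), this keeps the active field and its label on screen rather than hidden behind the keyboard.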

    Content

    Is the app supporting snapped and fill views?

    Remember that snapping is simply resizing your app. Snapped and fill views are only available on displays with a horizontal resolution of 1366 relative pixels or greater. Because users can snap every app, you should design your app for the snapped view state. If you don't, the system resizes your app anyway and might crop your content or add scrollbars.

    3.5 Design for all

    Content

    Is your app supporting blind or visually impaired users?

    Blind or visually impaired users rely on screen readers to help them create and maintain a mental model of your app's UI. Hearing information about the UI, including the names of UI elements, helps users understand the UI content and invoke available functionality.

    To support screen reading, your app needs to provide sufficient and correct information about its UI elements, including the name, role, description, state, position, and so on.

    Content

    Is the content of your app supporting visually impaired users?

    Visually impaired users need text to be displayed with a high contrast ratio. They also need a UI that looks good in high-contrast mode and scales properly after selecting Make everything on your screen bigger in the Ease of Access control panel. Where color is used to convey information, users with color blindness need color alternatives like text, shapes, and icons.

    Input

    Is the keyboard accessible?

    The keyboard is integral to using a screen reader, and it is also important for users who prefer the keyboard as a more efficient way to interact with an app. An accessible app lets users access all interactive UI elements by keyboard only, enabling users to:

    Navigate the app by using the Tab and arrow keys.

    Activate UI elements by using the Spacebar and Enter keys.

    Access commands and controls by using keyboard shortcuts.

    The On-Screen Keyboard is available for systems that don't include a physical keyboard, or for users whose mobility impairments prevent them from using traditional physical input devices.
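The keyboard behaviors listed above boil down to a “roving focus” pattern: one focus index moves over the interactive elements, and Enter/Spacebar activate the focused one. A minimal, framework-free sketch in Python (the class and key names are illustrative; `activated` records actions instead of invoking real UI):

```python
class FocusList:
    """Roving keyboard focus over a list of focusable items."""

    def __init__(self, items):
        self.items = list(items)
        self.index = 0        # currently focused item
        self.activated = []   # record of activated items (stand-in for real UI)

    def on_key(self, key: str) -> None:
        if key in ("Tab", "ArrowDown", "ArrowRight"):
            self.index = (self.index + 1) % len(self.items)   # move focus forward
        elif key in ("Shift+Tab", "ArrowUp", "ArrowLeft"):
            self.index = (self.index - 1) % len(self.items)   # move focus backward
        elif key in ("Enter", "Space"):
            self.activated.append(self.items[self.index])     # activate focused item

    @property
    def focused(self):
        return self.items[self.index]
```

A screen reader can then announce `focused` on every change, which is exactly the name/role/state information the preceding paragraphs call for.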

    Input

    Is the on-screen keyboard showing the correct visual feedback?

    The on-screen keyboard should always give feedback on pressed/selected states. In addition, use haptic feedback, where available, for best results.

    Input

    Does the UI offer a right-/left-handed switch?

    Some applications should support both right- and left-handed use. Developers should offer the possibility of switching between the two modes through a toggle or button.

    3.6 Conversion Table

    The following table offers translation standards for all input types. It should be followed as a guide for general interactions, while specific ad-hoc gestures should be crafted according to their own specifications.

    | Interaction | Touch | Mouse | Keyboard (hardware) |
    | --- | --- | --- | --- |
    | Select (list or grid) | Swipe opposite the panning direction | Right-click | Spacebar |
    | Show app bar | Swipe from top or bottom edge | Right-click | Windows Logo Key+Z, menu key |
    | Context menu | Tap on selected text, press and hold | Right-click | Menu key |
    | Launch/activate | Tap | Left-click | Enter |
    | Scrolling short distance | Slide | Scroll bar, arrow keys, left-click and slide | Arrow keys |
    | Scrolling long distance | Slide (including inertia) | Scroll bar, mouse wheel, left-click and slide | Page Up, Page Down |
    | Rearrange (drag) | Slide opposite the scrolling direction past a distance threshold | Left-click and slide | Ctrl+C, Ctrl+V |
    | Zoom | Pinch, stretch | Mouse wheel, Ctrl+mouse wheel, UI command | Ctrl+Plus(+)/Minus(-) |
    | Rotate | Turn | Ctrl+Shift+mouse wheel, UI command | Ctrl+Plus(+)/Minus(-) |
    | Insert cursor/select text | Tap, tap on gripper | Left-click+slide, double-click | Arrow keys, Shift+arrow keys, Ctrl+arrow keys, and so on |
    | More information | Press and hold | Hover (with time threshold) | Move focus rectangle (with time threshold) |
    | Interaction feedback | Touch visualizations | Cursor movement, cursor changes | Focus rectangles |
    | Move focus | N/A | N/A | Arrow keys, Tab |

    4. Best Practices


    This chapter provides examples of games that have been ported very well from a traditional point-and-click UI to a touch gaming experience. Although the games presented may differ from an interested developer’s genre, their solutions cover crucial parts of the porting process. The examples span different areas such as extended user experience, UI, and navigation: the first focuses on good examples of keeping a solid, consistent experience across platforms; the others present cases in which clever and relevant UI solutions have been adopted.

    4.1 Extended User Experience

    A key value to consider when porting games to touch devices is providing a unified, extended user experience. Depending on the game, key elements can serve as means of unification (e.g. UI, color coding, menus, storytelling). In some cases the gameplay is identical on every platform; in other cases it focuses on key features. Some examples covering these different scenarios are presented below. FIFA 13 is a good example of an identical experience on every device. The UI matches perfectly across platforms, both in aesthetics and functionality; the developers did well in adapting sizing and screen positioning for smaller screens, creating a unified feel.

    Even the gameplay offers exactly the same features, simply translated to touch gestures; the way a player is controlled on iOS closely resembles the PlayStation gamepad: X/Y movement is handled by a digital stick in the left-hand corner of the screen, and the main actions sit in the right-hand corner. It is interesting to observe how touch and small devices forced developers to sharpen the focus on selected “core” functionalities rather than trying to translate and fit everything. As an example, the skill move, a two-step action on PlayStation (i.e. L2+stick), has been simplified into a single panning action (i.e. move the special-move stick around while keeping it pressed). In addition, all the special variants of passing and shooting available on PlayStation have been removed and unified into a single mechanic.

    Many other points could be made in this gesture-translation analysis; however, the main takeaway from this example is the attention EA’s developers put into identifying the core functionalities and translating only those to touch.

    The same concept of a unified experience can also be analysed from another perspective. Game developer CCP offers a unified yet distinct experience on every device. On PC, the massively multiplayer EVE Online offers a complex set of controls and communication tools needed to control, master, and conquer parts of a universe (the EVE universe) by sending troops and aerial attacks.

    On PlayStation, players are part of the troops sent to the universe’s different planets to conquer them. The game hence becomes an FPS (i.e. Dust 514), but one still connected to, and able to interact live with, the EVE Online players on desktop.

    The touch experience (i.e. the Neocom app) focuses only on the core of the two previous games: the navigation (i.e. the Neocom menu) from EVE, and the character setup from Dust 514. The result is a companion app in which players can tweak and prepare their characters for battle before, during, and after gameplay.

    In this case, the experience is spread across platforms, but each platform focuses on the functionality it serves best (PC = complexity, PS3 = immersive gameplay, touch = portability and control). In particular, the touch application perfectly resembles the UI of both Dust and EVE, with particular attention to sizing and interaction. The developers did a very good job of adapting the deep, complex menu into a simplified, easy-to-navigate touch version.

    4.2 Gestures and Gameplay

    As already mentioned, the key to successfully porting to touch devices is to focus actions and simplify complex combined gestures. However, from a purely experiential point of view, there are other good examples of details that make the transition to touch controls smoother.

    Driving games, Real Racing and Need for Speed above all, are very valid examples of how a sensor (e.g. the accelerometer) can be used and tuned to increase the quality of the gameplay. Since tilting the device equals steering the car, designers noticed that rotating an iPad often rotated the screen and the game visuals as well; this resulted in awkward playing positions and frequent loss of control. Real Racing solved this issue by balancing the device tilt against the visuals tilt. The resulting improvement made the game highly playable compared to competitors and soon became a standard.

    When dealing with complex, interaction-rich controls, game developers very often opt for a semi-automatic version of them, giving more attention to the gameplay and the user’s involvement than to control accuracy. Some might argue that the game no longer feels like the original, but practice has shown that players on touch devices are not looking specifically for accuracy, but for involvement and immersive game experiences.

    Bastion, an RPG and winner of several awards, used semi-automated shooting controls (i.e. assisted aiming) for the iPad version, along with a simple tap-and-drag control for moving the character around the environment, with both touch-driven and automatic panning of the scene. The result is an award-winning game that clearly resembles the original, despite offering a different kind of engagement.

    Another interesting point to keep in mind is the type of controls (e.g. buttons, digital sticks, etc.) to present on the touch UI. Depending on the complexity of the game, two main scenarios typically occur. The first, typical of games with simpler or very rigid controls (e.g. driving games, old-school games, sports in general), uses an on-screen digital representation of the gamepad.

    In the example on the previous page (i.e. Tony Hawk touch), a big part of the screen is covered by controls. Besides being the easiest solution when porting a game to touch, its most visible drawback is that the gameplay is often covered by the player’s hands, resulting in a rather disappointing experience. Moreover, user testing showed that digitized buttons are genuinely hard to use correctly, as they give none of the feedback that physical buttons do. Developers should keep in mind that some companies are creating physical add-ons that snap onto phones and iPads to simulate physical buttons; however, this topic is not covered in this report, partly due to those devices’ limited market penetration.

    The second scenario, instead, is typical of more complex games with rich storytelling. In this case, the buttons on touch devices represent a specific action that on a gamepad might only be achievable by using multiple buttons.

    In the example presented above (i.e. Grand Theft Auto), players are given controls based on contextual actions. When in “walking” mode, players get specific action controls that differ from those in “driving” mode. The advantage of this solution is the possibility of covering several types of gameplay without ending up with a cluttered, hence unusable, UI. Moreover, the game gives a hierarchical organization to the placement of the buttons: more frequent or primary actions deserve a bigger size, hence better reachability, than minor ones (e.g. accelerate and brake are bigger and easier to reach than the horn).

    In addition to all this, they added the flexibility for users to customise the UI. This lets players decide exactly where each button icon and HUD element, such as the mini-map, appears on the screen: simply touch and drag each item to wherever it should be, and double-tap an item to resize it.

    5. Recommended Documentation


    Microsoft UX Guidelines
    http://msdn.microsoft.com/en-us/library/windows/apps/hh779072.aspx

    Microsoft Touch Interaction Design
    http://msdn.microsoft.com/en-us/library/windows/apps/hh465415.aspx

    Windows Store App Certification Requirements
    http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx

    Getting started with Windows Store apps
    http://msdn.microsoft.com/library/windows/apps/br211386

    Apple’s Desktop to Touch transition Guidelines
    http://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/TranslateApp/TranslateApp.html

    Apple’s App Design Strategies
    http://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/AppDesign/AppDesign.html

    Touch UI: iPad to Windows 8
    http://msdn.microsoft.com/en-us/library/windows/apps/hh868262

    MS Touch Guidelines
    http://msdn.microsoft.com/en-us/library/windows/apps/hh779072.aspx

    Android Guidelines
    http://developer.android.com/design/index.html

    Portrait vs Landscape
    http://blogs.msdn.com/b/b8/archive/2011/10/20/optimizing-for-bothlandscape-and-portrait.aspx

    Windows 8 – Designing Great Games
    http://msdn.microsoft.com/en-us/library/windows/apps/hh868271.aspx

    Baldur’s Gate: Enhanced Edition (Release for PC & iPad)
    https://itunes.apple.com/us/app/baldurs-gate-enhanced-edition/id515114051?mt=8

    Grand Theft Auto 3 & Vice City (Released for Consoles, PC and recently iPad/iPhone)
    https://itunes.apple.com/nz/app/grand-theft-auto-vice-city/id578448682?mt=8

    Mobile Interaction Design (2006), Matt Jones, Gary Marsden
    http://www.amazon.com/Mobile-Interaction-Design-Matt-Jones/dp/0470090898/ref=sr_1_1?ie=UTF8&qid=1362584655&sr=8-1&keywords=mobile+interaction+design

    Web Form Design: Filling in the Blanks (2008), Luke Wroblewski
    http://www.amazon.com/Web-Form-Design-Filling-Blanks/dp/1933820241/ref=sr_1_10?s=books&ie=UTF8&qid=1362584687&sr=1-10&keywords=mobile+interaction+design

    Swipe This!: The Guide to Great Touchscreen Game Design (2012), Scott Rogers
    http://www.amazon.com/Swipe-This-Guide-Touchscreen-Design/dp/1119966965/ref=sr_1_1?s=books&ie=UTF8&qid=1362584940&sr=1-1&keywords=game+touch+design

    Bastion’s Amir Rao - Full Keynote Speech - D.I.C.E. SUMMIT 2013
    http://www.youtube.com/watch?v=qylr_oGfmCQ


    1 Guidelines for layouts (Windows Store apps)
    2 First Person Shooter
    3 Navigation design for Windows Store apps
    4 Re-imagining Apps for Ultrabook™: Full Series with Luke Wroblewski

    Delivering a Better User Experience for Windows* 8 Applications - Data Caching and Content Syncing

    Download Article


    Delivering a Better User Experience for Windows* 8 Applications - Data Caching and Content Syncing [PDF 242KB]

    Introduction


    Windows* 8’s new user interface favors applications that quickly and persistently surface the latest information to users through “Live Tiles”. Live Tiles are regularly updated with a quick hit of the latest information, whether it’s the number of new emails received, recent postings from friends, trending news articles, or stock prices. If a user wants more detail, to read new emails or learn more about a news story, he or she opens the Live Tile or switches back to the application if it was already open. At this point, the majority of applications go back out to the cloud to pull down the full content that the Live Tile was summarizing. This is fine with an active network connection, but what about notebooks that have intermittent connectivity (e.g. WiFi only)? This can create a jarring user experience: if the user has no network connection when attempting to view the details, he or she will either get an error message that there is no internet access, or will see detail that doesn’t match the Live Tile. Either way, the result is a poor, unexpected experience and a dissatisfied user. Architecting your application to mitigate this has multiple benefits:

    • Users will experience a quicker response to getting data than waiting for the application to retrieve everything from the cloud.
    • Content will be available regardless of connectivity.
    • Applications that proactively address these two scenarios will differentiate themselves on systems that have either Connected Standby or Intel® Smart Connect Technology by providing the latest information when the user returns to the PC. Both of these technologies allow applications to update content while the system is in standby.

    Resolution

    The “no internet” error message problem can be addressed by developers designing and building their applications to cache data for offline use. This allows the user to go back and see information regardless of their connectivity – previously viewed emails, yesterday’s news articles, a friend’s last picture post, etc. The stale data problem can be resolved by integrating background tasks that Windows* 8 already provides to create a direct correlation between what the Live Tile is communicating and the detailed information in the application. A side benefit is that the user will perceive quicker access to the already fetched data as compared to waiting for the information to be retrieved on demand. This article will help developers tackle both of these problems as well as provide sample code to help illustrate the solution.

    Designing an application with this in mind helps improve the user experience for systems with intermittent connectivity (e.g. WiFi only) and systems that support Connected Standby or Intel® Smart Connect Technology.

    Target Applications

    In general, these capabilities are most useful for applications that get regular updates from an internet or cloud service. Examples of applications that would benefit include email, social media (Facebook*, Twitter*, Pinterest*, etc.), news outlets, magazine subscriptions, and stock tickers.

    Caching Data

    As mentioned previously, being able to access information from an application even when it is offline provides a better user experience. Microsoft has written an excellent article that highlights the four key HTML5 features that allow developers to create a good offline application experience for Windows 8 New User Interface. The four features are:

    • AppCache to store file resources locally and access them offline as URLs
    • IndexedDB to store structured data locally so you can access and query it
    • DOM Storage to store small amounts of text information locally
    • Offline events to detect whether you’re connected to the network

    You can find this detailed article on MSDN: Building Offline Experiences with HTML5 AppCache and IndexedDB.
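The offline pattern the article describes (serve cached content first, refresh when connected, fall back to stale data when offline) can be sketched independently of the HTML5 storage APIs. A hedged, plain-Python stand-in for AppCache/IndexedDB-style caching; the class name is hypothetical, and `fetch` is a caller-supplied function that may raise when there is no network:

```python
import time


class OfflineCache:
    """Cache-then-network: fresh data when possible, stale data over errors."""

    def __init__(self, fetch, max_age_s: float = 300):
        self.fetch = fetch          # caller-supplied; raises OSError when offline
        self.max_age_s = max_age_s
        self._store = {}            # key -> (value, timestamp)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        cached = self._store.get(key)
        if cached and now - cached[1] < self.max_age_s:
            return cached[0]        # fresh enough: no network round trip
        try:
            value = self.fetch(key)  # try to refresh from the network
        except OSError:
            if cached:
                return cached[0]    # offline: serve stale data, not an error page
            raise                   # offline and nothing cached: surface the error
        self._store[key] = (value, now)
        return value
```

This is the behavior the “no internet” paragraph asks for: previously viewed content stays readable regardless of connectivity, and the error case only occurs when nothing was ever cached.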

    In addition to the HTML5 data storage techniques described above, Microsoft has provided other application data storage mechanisms for Windows 8. These mechanisms allow a user to easily manage everything from simple state information to large amounts of content. The data may also be designated as temporary, local or roaming, turning the management of persistence and synchronization between devices over to the operating system. An excellent article on these concepts is on MSDN: Accessing app data with the Windows Runtime (Windows Store apps).

    Synchronizing Application Data Periodically

    An application will typically update with relevant content as the user interacts with it, but developers should also consider all the cases in which the application should update when it is not in the foreground or even when the system is not powered on. These cases may include:

    • When a push notification is received and reflected on the Live Tile
    • After an appropriate amount of time has passed since the last synchronization
    • When internet access becomes available after a period of time

    An application can subscribe to these events to trigger synchronization. These are called background tasks and they were introduced in Windows 8 for application developers to allow for things like content updates. A background task is a separate executable that is allowed to run, even in Connected Standby or when the application is suspended, when triggers associated with it occur.

    Background tasks are designed and bundled with the core application. The triggers associated with each background task must be defined in the code and all necessary capabilities must be declared in the application manifest. A great overview that covers triggers and manifest information is “Guidelines for Background Tasks” on MSDN. (http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh977051)

    For more details on application manifest refer to “How to Declare Background Tasks in the Application Manifest” on MSDN. (http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh977049) In order to accommodate a wide variety of systems with varying capabilities and configurations, a combination of triggers may be most appropriate.

    To provide the most up to date and responsive experience, a system may either have Connected Standby or Intel® Smart Connect Technology but not both. Some systems may have neither. Each case will benefit from slightly different triggers but they can be used together to cover most cases. For example, a system in Connected Standby will maintain Internet connectivity when available so a timer trigger may be most appropriate. On the other hand, a sleeping system with Intel® Smart Connect Technology will wake periodically and should only check for new content if the Internet is available. And for any PC, triggering on internet availability along with a timer will ensure that the data is synchronized and the internet connection is leveraged within the Windows 8 framework.
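The trigger-selection reasoning above can be sketched as a small decision function. This is a hedged illustration of the logic only: the trigger names below are illustrative labels, not actual Windows Runtime background-task identifiers:

```python
def pick_triggers(connected_standby: bool, smart_connect: bool) -> set:
    """Choose background sync triggers based on system capabilities (sketch)."""
    # On any PC: a maintenance timer plus an internet-available trigger keeps
    # cached content synchronized whenever a connection exists.
    triggers = {"timer", "internet_available"}
    if smart_connect:
        # Intel Smart Connect wakes the system periodically; only sync if the
        # network actually came up during the wake window.
        triggers.add("network_check_on_wake")
    if not connected_standby and not smart_connect:
        # Systems with neither technology can only catch up when resumed.
        triggers.add("sync_on_resume")
    # Connected Standby keeps the connection alive in standby, so the base
    # timer trigger alone already keeps content fresh there.
    return triggers
```

The point is that the same application manifest can declare a combination of triggers, and the set that actually fires degrades gracefully with the hardware’s capabilities.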

    Conclusion

    The new user interface that arrived with Windows 8 is a great vehicle for application developers to deliver new experiences. These experiences can be even richer for applications that periodically refresh content from the cloud. Developers whose applications fit this theme are encouraged to incorporate background tasks for more frequent synchronization, along with data caching, into their designs to improve the user experience on Windows 8.

    About the Authors

    Josh Moss is a Product Marketing Engineer at Intel and works on Intel® Smart Connect Technology.
    Tom Propst is an Applications Engineer at Intel enabling enterprise software.

    Detecting Slate/Clamshell Mode & Screen Orientation in Convertible PC

    Downloads


    Download Detecting Slate/Clamshell Mode & Screen Orientation in Convertible PC [PDF 574KB]
    Download DockingDemo3.zip [37 KB]

    Executive Summary


    This project demonstrates how to detect slate vs. clamshell mode, as well as simple orientation detection, in Windows* 8 desktop mode. The application is a tray application in the notification area, based on Win32 and ATL. The tray application also works when the machine is running in New Windows 8 UI mode. It uses Windows messages and the Sensor API’s notification mechanism, so it doesn’t need polling. However, the app requires appropriate device drivers, and it was found that many current OEM platforms don’t ship the drivers needed for slate/clamshell mode detection. The simple orientation sensor works on all tested platforms.

    System Requirements


    System requirements for slate/clamshell mode detection are as follows:

    1. A slate/clamshell mode indicator device driver (compatible ID PNP0C60). To verify it, go to Device Manager -> Human Interface Devices -> GPIO Buttons Driver -> Details -> Compatible Ids; if you find PNP0C60, that’s the driver. Without this driver, slate mode detection doesn’t work.
    2. A docking mode indicator device driver (compatible ID PNP0C70), required for classic docking mode detection.

    System requirements for orientation detection:

    1. A simple device orientation sensor (present in all tested convertible PCs).

    Application Overview


    • Compile and run the application; it will create a tray icon. For testing purposes, customize “Notification Area Icons” so that DockingDemo.exe’s behavior is “Show icon and notifications” in the lower right corner of the screen.
    • Move the mouse over the icon and it shows the current status.

    • Right-click on the icon for further menus – About, Save Log…, and Exit. Save Log lets you save all events to a specified file; saving the events to the log clears them from memory.
    • Switch back and forth between slate and clamshell modes, or rotate the platform. The tray icon will pop up a balloon to notify you of the change.

    Slate / Clamshell Mode Detection


    The OS broadcasts the WM_SETTINGCHANGE message to all windows when it detects a slate mode change, with the string “ConvertibleSlateMode” in lParam. In the case of a docking mode change, it broadcasts the same message with the string “SystemDockMode.” WinProc in DockingDemo.cpp handles this message. The API to query the actual status is GetSystemMetrics. This method also works when the system is running in New Windows 8 UI mode.

     
    // Query the current state after a WM_SETTINGCHANGE notification.
    // SM_CONVERTIBLESLATEMODE returns 0 while the system is in slate mode.
    BOOL bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
    // SM_SYSTEMDOCKED returns non-zero while the system is docked.
    BOOL bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0);
    

    Screen Orientation Detection


    In the desktop environment, the OS broadcasts the WM_DISPLAYCHANGE message to all windows when it detects orientation changes. lParam’s low word is the width and its high word is the height of the new orientation.

    There are two problems with this approach:

    • This approach only detects landscape and portrait mode. There is no distinction between landscape vs. landscape flipped and portrait vs. portrait flipped.
    • WM_DISPLAYCHANGE simply doesn’t work when it is running in New Windows 8 UI mode.

    Fortunately, Microsoft* provides COM interfaces to directly access the various sensors and there are various white papers about how to use it. Some of the references are listed here.

    In this project, SimpleOrientationSensor class implements the infrastructure to access the orientation sensor, and OrientationEvents class is sub-classed from ISensorEvents to register the callbacks for the orientation change events. Since the Sensor APIs use callback mechanism, the user application doesn’t have to poll the events. This approach works when the system is running in New Windows 8 UI mode.

    The relationship between slate mode and rotation needs to be carefully thought out. Rotation may be enabled or disabled automatically depending on the slate/clamshell mode. To ensure proper behavior, this sample uses a combination of the GetAutoRotationState API and the rotation sensor, i.e., it discards rotation event notifications when autorotation is not enabled. In that case, it uses EnumDisplaySettings to get the current orientation in the NotifyOrientationChange function, as shown in the code snippet below.
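The sensor/autorotation arbitration described above can be expressed as a tiny decision function. This is a hedged, language-neutral sketch of that logic only (the function name and string values are hypothetical), not the sample’s actual Win32 code:

```python
def resolve_orientation(autorotation_enabled: bool,
                        sensor_orientation: str,
                        display_orientation: str) -> str:
    """Pick which orientation to report.

    When autorotation is enabled, trust the rotation sensor event.
    When it is disabled, discard the sensor event and fall back to the
    current display setting (what EnumDisplaySettings would report).
    """
    if autorotation_enabled:
        return sensor_orientation
    return display_orientation
```

This mirrors the sample’s rule: sensor notifications only drive the UI while autorotation is on; otherwise the display’s own setting is authoritative.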

    Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.
    Copyright© 2013 Intel Corporation. All rights reserved.

    License
    Intel sample sources are provided to users under the Intel Sample Source Code License Agreement.

    Attachment: dockingdemo3.zip (36.96 KB)
    Ballastic Game Case Study - App Innovation Contest Game Winner


    By Edward Correia

    Downloads


    Ballastic Game Case Study [927 KB]

    Introduction


    In October 2012, Matthew Pilz, a programmer from rural Wisconsin, accepted a challenge. He had been lurking around the CodeProject website doing research when he came across a blog post by Lee Bamber, programming legend and founder of TheGameCreators.com, urging people to take part in the Intel® App Innovation Contest. Using a game idea that had stuck with him from years earlier, Pilz entered the contest, and in two months submitted his app, Ballastic. It won the gaming category's top prize.

    About the Ballastic App

    Ballastic was inspired by CANO-Lab's Pendulumania*, a Japanese freeware game that Pilz had played in the late 1990s. The object of the game was to collect sparkling targets for points. Ballastic expands on that idea by adding rewards for capturing certain targets and for avoiding hazards and challenges that increase in number along with the difficulty levels. “I thought that would be the perfect platform using the touch interface to create the vision in my head,” Pilz said.


    Figure 1. As Ballastic game levels advance, balls are added to the game piece to increase the difficulty of movement.

    In addition to participating in the competition, Pilz said the original goal of Ballastic was to make a casual game that could easily be learned and played by people at any age or skill level. "I didn't have any specific target demographics. Simplicity is my goal with every app I develop," said Pilz.

    Mission accomplished. Pilz has received feedback from both kids and adults saying that Ballastic is a great game that they have a good time playing.


    Figure 2. The Ballastic start screen.

    Since this was the first application Pilz had developed for Intel's Ultrabook™ device platform, he wanted to take advantage of as many of the platform's unique capabilities as he could and use them in unique ways. As the game progresses, the backgrounds change and the difficulty increases, similar to Lima Sky's Doodle Jump*, which also presents just a single level that gets harder and harder. Ballastic's backgrounds change in another unique way. "The light sensor capability of the Ultrabook device is one of the coolest features that I have seen on any development environment. That capability allowed me to actually create different themes and different graphics depending on the level of light in the room."

    By using the ambient light sensor in this way, Pilz was able to program the game to take on different appearances based on where it's being played. If the game is played outside in the sunlight, it will have a contrasted appearance, compared to being played in a room at night where it will have a totally different look. And for indoor play, backgrounds can vary along with screen brightness based on how much light is in the room. "I thought that would be an interesting concept," said Pilz.
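    A light-driven theme switch like the one Pilz describes could be sketched as below. The thresholds and theme names are hypothetical; in AGK, the raw reading would come from GetRawLightLevel() on an Ultrabook device.

```cpp
#include <string>

// Hypothetical mapping from an ambient light reading (lux) to a visual
// theme. Thresholds are illustrative only; AGK's GetRawLightLevel()
// would supply the reading at runtime.
std::string ThemeForLightLevel(double lux)
{
    if (lux < 10.0)   return "night";     // dark room: dim, low-glare assets
    if (lux < 1000.0) return "indoor";    // normal indoor lighting
    return "sunlight";                    // outdoors: high-contrast assets
}
```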

    Pilz already had experience with touch, having built several apps for the Apple iPad*, iPhone*, and other handheld and mobile devices. But he was naturally excited to have access to the increased computing resources afforded by the Ultrabook device platform. "Obviously the increased hardware support and graphical capabilities of Ultrabook provide an environment for more graphically rich and CPU-intensive applications than what you can get currently on mobile devices."

    The increased processing power and feature set of the Ultrabook device allowed Pilz to add more shine and polish to Ballastic than could have been easily achieved on mobile devices. The high resolution and wide-screen display allowed him to design assets natively for 1600x900 and higher resolutions at a 16:9 aspect ratio, whereas most mobile devices require that the game be 1024x768 or lower, and often at a 4:3 aspect ratio, thus diminishing the overall style and available game space. The graphical assets and sprite effects would likewise suffer from performance and visual discrepancies if translated to a less powerful platform. Most notably, the fluid background effects and dynamic particles seen throughout the game—comprising hundreds of sprites with constantly changing properties—would not perform as well on a mobile device without requiring significant optimizations and reduction in quantity.

    Another benefit of the Ultrabook device is that its GPU supports a much higher refresh rate than the 30 to 60 frames per second that can be achieved on most other platforms. “Although a lower frame rate is certainly acceptable, being able to uncap Ballastic so that it runs at several hundred frames per second on the Ultrabook allowed me to provide exceptionally smooth and responsive controls and animations throughout the game,” said Pilz.

    The Challenges of Creating Ballastic


    Pilz faced several major challenges when developing Ballastic for the Ultrabook device, including the following:

    • A small window of time in which to develop, test, and submit the app
    • An unfamiliarity with the Ultrabook device platform
    • Adapting to differences in touch development between iOS and Windows* 8

    Developing, Testing, and Submitting the App

    Pilz started in mid-October, about one month before the November 20 deadline, so he spent about a month-and-a-half of full-time, solid work—including several sleepless nights—to get his app to where he felt comfortable releasing it. Pilz had made an early design decision to make the app touch compatible, so its buttons and controls had to be large enough to allow users to touch them without a problem. He built an on-screen keyboard for entering high scores, and everything within the game can be accessed purely by touch. Pilz also incorporated the Ultrabook device keyboard and mouse as optional inputs for gameplay and menu navigation.

    All-in-all, Pilz felt the experience of implementing touch controls on Windows 8 was similar to that of iOS, but there were a few bumps. “The biggest obstacles I found were design-oriented instead of SDK- or programming-related,” he said. One challenge was to ensure that all functionality could be interacted with easily and intuitively via touch. This meant bigger buttons and icons than what would be necessary if supporting only mouse input. The touch sensor functions were abstracted out nicely using the App Game Kit* (AGK), a set of OpenGL*-based libraries that made it a breeze to implement without having to dive into any of the raw SDK commands for such functionality.

    Although Ballastic uses only single-touch controls, Pilz said it would have been easy to support multi-touch if the game needed to. AGK includes functions such as GetRawTouchCount(), GetRawTouchCurrentX(index), GetRawTouchCurrentY(index), GetRawTouchLastX(index), and GetRawTouchLastY(index) that can interpret as many touches as the device supports.
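    A multi-touch read loop over an AGK-style API could look like the following sketch. The AGK calls are stubbed with a plain struct and vector here so the loop logic stands alone; the Touch fields model what GetRawTouchStartX/Y(index) and GetRawTouchCurrentX/Y(index) would report.

```cpp
#include <vector>

// Stand-in for one active touch as AGK would report it: start position
// (GetRawTouchStartX/Y) and current position (GetRawTouchCurrentX/Y).
struct Touch { int startX, startY, curX, curY; };

struct Drag { int dx, dy; };

// Iterate every active touch (GetRawTouchCount() in AGK) and compute
// how far each has been dragged since it began.
std::vector<Drag> DragDeltas(const std::vector<Touch>& touches)
{
    std::vector<Drag> deltas;
    for (const Touch& t : touches)
        deltas.push_back({ t.curX - t.startX, t.curY - t.startY });
    return deltas;
}
```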

    Pilz said the native SDK commands provided by Intel also were straightforward. Intel’s SDK uses common touch interfaces provided by Microsoft, including three different mechanisms for handling touch input: WM_TOUCH, which supports both Windows 7 and 8 Desktop; WM_POINTER, which is specific to Windows 8 Desktop; and PointerPoint, for developing Windows 8 Modern UI (also known as “Metro”) apps. Ballastic was developed as a Windows 7/8 desktop application, so had it been built against the raw interfaces it would most closely map to WM_TOUCH events. This allows it to run on any modern Windows desktop machine, supporting a greater number of devices than the other two methods. Much like iOS, Windows applications generally store touch information in a simple array or list that can be iterated through to read and interpret the touch points with ease.

    The Ultrabook™ Device Platform

    Fortunately for Pilz, Lee Bamber and the people at TheGameCreators.com, of which Pilz has been an active member since its inception in 1999, were hard at work on the latest beta of AGK, which does most of the game programming's heavy lifting. Pilz followed Bamber’s blog from Intel’s Ultimate Coder Challenge and read that he was adding features to support the various Ultrabook device sensors in beta versions of AGK, which he subsequently released to the community. Pilz knew he had to come up with the best platform and language to develop a game quickly. Bamber's code mapped Ultrabook device functions to a few easy-to-understand functions in both Tier 1 (BASIC) and Tier 2 (C++):

    Sensor Support
    GetNFCExists()
    GetGeolocationExists()
    GetCompassExists()
    GetGyrometerExists()
    GetInclinometerExists()
    GetLightSensorExists()
    GetOrientationSensorExists()

    Notifications
    NotificationCreate()
    NotificationReset()
    GetNotification()
    GetNotificationData()
    GetNotificationType()
    SetNotificationImage()
    SetNotificationText()

    Near Field Communications (NFC)
    GetRawNFCCount()
    GetRawFirstNFCDevice()
    GetRawNextNFCDevice()
    GetRawNFCName()
    SendRawNFCData()
    GetRawNFCDataState()
    GetRawNFCData()

    Geolocation
    GetRawGeoLatitude()
    GetRawGeoLongitude()
    GetRawGeoCity()
    GetRawGeoCountry()
    GetRawGeoPostalCode()
    GetRawGeoState()

    Compass
    GetRawCompassNorth()

    Gyrometer
    GetRawGyroVelocityX()
    GetRawGyroVelocityY()
    GetRawGyroVelocityZ()

    Inclinometer
    GetRawInclinoPitch()
    GetRawInclinoRoll()
    GetRawInclinoYaw()

    Ambient Light Sensor
    GetRawLightLevel()

    Device Orientation Sensor
    GetRawOrientationX()
    GetRawOrientationY()
    GetRawOrientationZ()
    GetRawOrientationW()

    Touch Sensor
    GetRawTouchCount()
    GetRawTouchCurrentX()
    GetRawTouchCurrentY()
    GetRawTouchLastX()
    GetRawTouchLastY()
    GetRawTouchReleased()
    GetRawTouchStartX()
    GetRawTouchStartY()
    GetRawTouchTime()
    GetRawTouchType()
    GetRawTouchValue()

    Complete documentation of the above commands will be available on AppGameKit.com once the final version of AGK 1.08 is released, as well as within the beta downloads for existing customers.

    For Pilz, this meant that hardware features of the Ultrabook device platform could be tapped using a few simple functions. Best of all, the AGK includes BASIC and native-language support. "Those looking for more power and functionality than the standalone AGK BASIC language can provide are free to create AGK applications using Visual Studio*, Xcode*, Pascal, Eclipse*, and other environments and languages," wrote Pilz on his CodeProject page. He also noted that core commands of the AGK can easily be translated between Tier-1 BASIC and the Tier-2 native languages.

    AGK saved Pilz an enormous amount of time, and he credits Bamber for the success of his project. Because the toolkit’s developers abstracted out all of the Ultrabook device sensors and made them into a high-level API, Pilz was able to create rapid prototypes. He was also able to just call the different commands and read the values from the numerous sensors, including the ambient light level, accelerometer, and touch events, without having to spend days or weeks manually coding in the C++ environment or whatever the native SDK required at that time.

    However, the first few weeks were anything but smooth. “It was nerve-racking because AGK is a closed source library and it wasn’t fully prepared for Ultrabook support when the competition began,” said Pilz. “The AGK developers consistently released new beta builds throughout the competition—sometimes several iterations in a single week—each with increasingly comprehensive support for Ultrabook. Eventually AGK had every feature of the Ultrabook available via simple function calls.” Unbelievably, the final beta came out the day before the competition ended. So Pilz waited until that point to make sure that everything was supported and working on the Ultrabook device before he submitted the app to the Intel AppUp® center. Pilz credits the guys at The Game Creators for working around the clock to make sure this application development kit was compatible with the Ultrabook device. Good thing, too, because additional challenges arose that caused the project to lose time.


    Figure 3. As game play advances and objects are captured, the ball gets bigger and heavier, making it increasingly difficult for the player to maneuver and putting ever more stress on the elastic band that holds the balls together.

    Development Contest Rules

    In addition to several snags with the Comodo certificate-issuing system, Pilz hit a wall when submitting his app to the Intel AppUp center, Intel’s Ultrabook device app store. Pilz warns other developers to be sure to dot all i’s and cross all t’s in their application submissions, including the metadata: his application was rejected at the 11th hour because he did not include the registered trademark symbol next to the “Ultrabook” product name in his game description. Pilz was not the only applicant to run into that hurdle, but thankfully Intel was quick to approve the app once this minor detail was corrected.

    Testing for Ultrabook Devices

    With the technicalities behind him, Pilz turned his focus on testing and on acquiring the proper tools in the development kit to interface with the Ultrabook device. Having an Ultrabook device himself, he was able to test the sensors that he wanted his app to use.
    The experience overall was a positive one, and Pilz said he'd be glad to do it all again and continue developing apps for the Ultrabook device. "Absolutely, yes. It was a fun and unique experience for me, even having developed for the iPad and Android* and other devices. I just think having that raw, high-end computer power and a good processor and video card [lets people] develop apps that you can't really create on most mobile devices. Even though they’re getting more powerful, they still don't compare to a dedicated Ultrabook."

    It's Not about the Money

    While Ballastic is a free app, Pilz does generate revenue from other applications he’s built with the company he founded, LinkedPIXEL. In the future, Pilz might switch to a free/premium model, offering a free limited version with an upgrade to remove ads or purchase new levels. “My game lends itself well to limitless new levels and power-up expansions, but [charging for it] hasn't crossed my mind too much."
    For developers building apps for today's touch screen mobile devices, Ultrabook devices, and particularly running Windows 8, Pilz believes it’s important to think outside of the box when developing these new-style apps. "If you’re a traditional desktop programmer, you might build an interface that works well with keyboards and mice, but we also need to consider how people might benefit from a slightly different design to make things more efficient for those who may be using alternate input methods.” Pilz also points users to resources such as CodeProject and the Intel® Developer Zone, and, of course, libraries like AGK, and feels that developers should still become familiar with Intel’s SDKs because it's good to know exactly what features and commands the Ultrabook device supports.

    Pilz also recommends developing code with a mind toward reuse, ensuring that interfaces and input controls easily adapt in the future, because of the new types of input technologies that will be available. Developing an interface that actually works well with current technology and is also adaptable will open up the doors to a much wider range of users.

    Helpful Resources


    Pilz relied heavily on the App Game Kit platforms and features for source code libraries developed specifically for the Ultrabook device, its sensors and interfaces. He was originally led to AGK and the App Innovation Contest through contacts made on CodeProject, a collaboration web site for developers. Pilz also made regular use of the Intel® Developer Zone for reference materials related to Intel SDKs.

    About Matthew Pilz


    Matthew Pilz spent his early years in rural Wisconsin, enthusiastic about computers and game development. It all started when his brother got a Commodore 64, and Pilz has been working with computers ever since. With an associate’s degree in E-commerce and web administration from Milwaukee Technical College, a Bachelor of Science in Web Technologies from Bellevue University, and various computer-related technical certificates, Pilz has spent the last decade focused on web design and application development for a variety of platforms. In spring of 2013, Pilz also won a grand prize in the Intel® Perceptual Computing Challenge by creating a prototype application, Magic Doodle Pad, using a perceptual camera and the Intel® Perceptual Computing SDK.

    Portions of this document are used with permission and copyright 2012 by CodeProject. Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see software.Intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel AppUp, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.


    *Other names and brands may be claimed as the property of others.

      Copyright © 2013. Intel Corporation. All rights reserved.

    An Introduction to the 4th Generation Intel® Core™ Processor


    Downloads


    Download the Introducing the 4th Generation Intel® Core™ Processor (code-named Haswell) PDF [614KB]

    Abstract


    Intel is launching the 4th generation Intel® Core™ processor, code-named Haswell. Its capabilities build on the 3rd generation Intel® Core™ processor graphics. This introductory article provides a glimpse into the 4th gen processor, with an overview of highlights like the Intel® Iris™ graphics, performance enhancements, low power options, face recognition capabilities, and more. Microsoft Windows* 8 developers will also learn about capabilities available to both Desktop and the Modern UI environments and how to take advantage of the 4th generation processor capabilities.

    Key 4th generation processor features


    The new processor builds on the processor graphics architecture first introduced in 2nd gen Intel® Core™ processors. While 2nd generation processors were built with the 32 nm manufacturing process, both 3rd and 4th generation processors are based on 22 nm technology. The following paragraphs describe the key differences between the 3rd and 4th gen processors.

    First ever System on Chip (SoC) for a PC

    The 4th gen Intel® Core™ processor is the first ever SoC for a PC. System on Chip, or SoC, integrates all the major building blocks for a system onto a single chip. With CPU, Graphics, Memory, and connectivity in one package, this innovative modular design provides the flexibility to package a compelling processor graphics solution for multiple form factors.

    Enhanced battery life

    The 4th gen processor provides up to 9.1 hours of HD video viewing compared to 6 hours on the 3rd gen one. The latest processor also provides 10-13 days of standby power (with refreshed email and social media notifications) compared to 4.5 days of standby power on 3rd generation processors.

    Table 1: Battery life comparison between 3rd Generation and 4th generation Intel® Core™ Processors.

    [Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Configurations(i) For more information go to http://www.intel.com/performance.]

    Note: TDP, Thermal Design Power, represents worst-case system power.

    Intel® Iris™ Graphics

    Intel Iris Graphics allows you to play the most graphics-intensive games without the need for an additional graphics card. Graphics performance on the 4th gen processor is nearly double that of the previous generation of Intel® HD Graphics.

    Figure 1: Comparison of graphics performance of 4th gen Intel® Core™ with previous generations

    [ Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Configurations(ii). For more information go to http://www.intel.com/performance.]

    Additionally, Intel Iris Pro Graphics with integrated eDRAM provides close to double the performance of the 3rd generation HD Graphics GPUs on Ultrabook™ devices. Both GPUs operate at higher than 28W Thermal Design Power (TDP), compared to the 17W TDP of 3rd generation graphics, making them better suited for high-performance operations on Desktop/AIO/Laptop form factors. [Source: http://arstechnica.com/gadgets/2013/05/intels-iris-wants-to-change-how-you-feel-about-integrated-graphics/]

    Intel Iris Graphics also supports Direct3D* 11.1, OpenGL* 4.1, and OpenCL* 1.2, the Intel Quick Sync Video encoding engine, Intel® AVX 2.0, and DirectX extensions.

    4th Generation Intel® Core™ processor variants

    Multiple packages of the 4th generation processor are available to cater to the needs of the growing range of systems: Workstations, Desktops, Ultrabook systems, All-In-Ones, Laptops, and Tablets. While the higher-end variants targeted at Workstations and Desktops provide higher performance, they also consume more power than the more power-optimized mobile variants. Table 2 below provides a comparison of the variants for different form factors and usages.

    Table 2: 4th Generation Intel® Core™ Processor Variants

    The U and Y series are designed for Ultrabook devices, convertibles, and detachable form factors.

    The 4th gen processor line provides the flexibility to match power requirements with graphics performance. While the high end provides substantially better graphics performance, the lower end is suitable when lower graphics performance is required.

    For a detailed analysis of graphics capabilities, please refer to the Graphics Developers Guide.

    Intel® AVX 2.0

    Intel® Advanced Vector Extensions (Intel® AVX) 2.0 is a 256-bit instruction set extension to Intel® Streaming SIMD Extensions (Intel® SSE). Intel AVX 2.0 builds on version 1.0 and adds features such as fully pipelined fused multiply-add (FMA) on two ports, providing twice the floating-point performance for multiply-add workloads; 256-bit integer SIMD operations (up from 128-bit); gather operations; and bit manipulation instructions. These capabilities enhance usages such as face detection, pro imaging, high-performance computing, consumer video and imaging, increased vectorization, and other advanced video processing capabilities.
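    The multiply-add workloads that benefit from FMA are typified by the AXPY kernel below. This is a generic sketch: compiled with AVX2/FMA code generation enabled (for example -mfma or /arch:AVX2), the compiler can contract each multiply-and-add into a single fused multiply-add instruction and process eight floats per 256-bit vector.

```cpp
#include <cstddef>

// y[i] = a * x[i] + y[i]  -- a classic multiply-add (AXPY) kernel.
// With AVX2/FMA enabled, each iteration maps to a fused multiply-add,
// doubling peak floating-point throughput versus separate mul + add.
void axpy(float a, const float* x, float* y, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```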

    More resources on Intel AVX 2.0:

    Intel Iris Graphics Extensions to DirectX API

    An added feature with 4th generation processor graphics is the API set for DirectX extensions. Two APIs are available that provide for pixel synchronization and instant access. Pixel synchronization lets you effectively read/modify/write per-pixel data, which makes the tasks of programmable blending and order independent transparency (OIT) more efficient. Instant access lets both CPU and GPU access the same memory for mapping and rendering. These APIs work on DirectX 11 and above.

    For more detailed information, please refer to the Graphics Developers Guide.

    Security

    Ultrabook systems with 4th gen processors come with enhanced security features like Intel® Platform Trust Technology, Intel® Insider, and Intel® Anti-Theft technology(iii). The processors also feature Intel® Identity Protection Technology(iv), which provides identity protection and fraud deterrence.

    Developer Recommendations


    Developers looking to take advantage of the new features explained above can use the following guidelines for programming on 4th gen processors with Windows 8.

    1. Optimize apps for touch: Ultrabook systems with 4th gen processors all include touch screens. Developers should visit these UX/UI guidelines to optimize their app design and enable touch.

      More resources:

    2. Optimize apps with sensors: 4th generation processor-based platforms come with several sensors: GPS, Compass, Gyroscope, Accelerometer, and Ambient Light. These sensor recommendations are aligned with the Microsoft standard for Windows 8. Use the Windows sensor APIs, and your code will run on all Ultrabook and tablet systems running Windows 8.

      More resources:

    3. Optimize apps with Intel platform features: While Windows 8 allows for both Desktop and Windows Store apps, there may be a difference in how platform capabilities are exposed for each type. For Desktop applications, key features are Intel® Wireless Display (WiDi)(v) and security features such as Intel Anti-Theft Technology and Intel Identity Protection Technology, while HD Graphics is available for both types of apps. Please refer to resources below for more information on each.

      More Resources:

      On the Windows UI mode, key enablers are Connected Standby, HD Graphics, stylus input to support tablet usages and camera. Please refer to resources below for more information on each:

      More Resources:

    4. Optimize for visible performance differentiation: Desktop apps can be optimized to take advantage of Intel AVX 2.0, Intel Quick Sync Video encode, and post-processing for media- and visually-intensive applications. Note that the Intel Media SDK and Intel Quick Sync Video are available for Windows Store apps to take advantage of as well.

      More Resources:

    5. Optimize apps with capabilities from the Intel® Perceptual Computing SDK: The Intel AVX 2.0 capabilities built into the 4th gen processor provide for face recognition, voice recognition, and other interactive features that provide very compelling usages for Desktop apps.

      More resources:

    6. Optimize app performance with Intel® tools: Check out the Intel® Composer XE 2013 and Intel® VTune™ Amplifier XE 2013 for Windows Desktop. These suites provide compilers, Intel® Performance Primitives and Intel® Threaded Building Blocks that help boost application performance. You can also optimize and future-proof media and graphics workloads on all IA platforms with the Intel® Graphics Performance Analyzers 2013 and Intel Media SDK that are available for both Desktop and Windows Store apps.

      More resources:

    About the Author


    Meghana Rao is a Technical Marketing Engineer with the Developer Relations Division. She helps evangelize Ultrabook™ and Tablet platforms and is the author of several articles on the Intel® Developer Zone.

    (i) 3rd Gen Intel® Core™i7-3667U processor, Intel HD Graphics 4000, Tacoma Falls 2 reference design platform, 2x4GB DDR3L-1600, 120GB SSD, 13.3” enhanced display port panel with 1920x1080 resolution, 50 WHr battery, Windows* 8.

    4th Gen Intel® Core™i7-4650U processor, Intel HD Graphics 5000, pre-production platform, 2x2GB DDR3L-1600, 120GB SSD, 13.3” enhanced display port panel supporting panel self refresh with 1920x1080 resolution, 50 WHr battery, Windows* 8.

    (ii) 3rd Gen Intel® Core™i7-3687U processor, Tacoma Falls 2 reference design platform, Intel HD Graphics 4000, Intel HD Graphics driver 15.31.3063, 2x2GB DDR3L @ 1600MHz, 120GB SSD, 13.3” enhanced display port panel with 1920x1080 resolution, 50 WHr battery, Windows* 8.

    4th Gen Intel® Core™i7-4770R processor, pre-production platform, Intel 5200, Intel HD Graphics driver 15.31.3071, 2x2GB DDR3L @ 1600MHz, 160GB SSD, Windows* 8.

    (iii) No system can provide absolute security under all conditions. Requires an enabled chipset, BIOS, firmware, and software with data encryption, and service activation with a capable service provider. Consult your system manufacturer and service provider for availability and functionality. Service may not be available in all countries. Intel assumes no liability for lost or stolen data and/or systems or any other damages resulting thereof. For more information, visit www.intel.com/content/www/us/en/architecture-and-technology/anti-theft/anti-theft-general-technology.html.

    (iv) No system can provide absolute security under all conditions. Requires an Intel® Identity Protection Technology-enabled system, including a 2nd gen or higher Intel® Core™ processor enabled chipset, firmware and software, and participating website. Consult your system manufacturer. Intel assumes no liability for lost or stolen data and/or systems or any resulting damages. For more information, visit http://ipt.intel.com.

    (v) Requires an Intel® Wireless Display enabled PC, compatible adapter, and TV. 1080p and Blu-Ray* or other protected content playback only available on 2nd generation Intel® Core™ processor-based PCs with built-in visuals enabled. Consult your PC manufacturer. For more information, see www.intel.com/go/widi.


    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    OpenCL and the OpenCL logo are trademarks of Apple Inc and are used by permission by Khronos.

  • URL
Codemasters GRID 2* on 4th Generation Intel® Core™ Processors - Game development case study


    Downloads

    Codemasters GRID 2* on 4th Generation Intel® Core™ Processors - Game development case study PDF [1.34 MB]

    Abstract


    Codemasters is an award-winning game developer and publisher, with popular game brands like DiRT*, GRID*, Cricket*, and Operation Flashpoint*. With GRID 2, Codemasters wanted to deliver a compelling high-end experience on 4th generation Intel® Core™ processors even on low power Ultrabook™ systems. On top of that, GRID 2 includes power-friendly features to improve and extend the gaming experience when playing on the go with an Ultrabook device.

    Codemasters collaborated with Intel to make the most of the wide range of performance options available in systems running 4th generation Intel Core processors. As a result, Codemasters shipped GRID 2 with fantastic visual quality, increased performance, and significant improvements in power management and mobile features. The game looks and runs its best on Ultrabook devices with 4th gen Intel Core processors. GRID 2 uses two advanced features that are only made possible using the new Intel® Iris™ Graphics extension for pixel synchronization. With pixel synchronization, GRID 2 uses adaptive order independent transparency (AOIT) on the game’s foliage and adaptive volumetric shadow mapping (AVSM) for efficient self-shadowing particles. With both features together, the GRID 2 game artists had greater control than ever to create an immersive world in the game. With GRID 2 running on PCs with 4th gen Intel Core processors, gamers have a high-performance experience that looks fantastic and plays great.

    4th generation Intel Core processors bring big gains for GRID 2


    With the introduction of 4th gen Intel Core processors, Intel delivered several technology advances that Codemasters had been looking for. With the processor’s advancements in graphics technology, improved CPU performance, and the Intel Iris Graphics extensions to DirectX* API, Codemasters had the basis for outstanding features and performance in GRID 2, as well as a strong collaboration with Intel.

    With the graphics extensions, game developers have two new DirectX 11 extensions at their disposal, supported on 4th gen Intel Core processors.

    • The first is the Intel Iris Graphics extension for instant access, which lets the graphics driver deliver a pointer to a location in GPU memory that can also be accessed directly by the CPU. Previously, accessing the GPU’s memory would have resulted in a copy, even though the CPU and GPU share the same physical memory.
    • The second extension is the Intel Iris Graphics extension for pixel synchronization, which enables programmable blend operations. It provides a way to serialize and synchronize access to a pixel from multiple pixel shaders, and guarantee that pixel changes happen in a deterministic way. The serialization is limited to directly overlapping pixels, so performance remains unchanged for the rest of the code.

    Since the extensions were new for 4th gen Intel Core processors and hadn’t been used in a shipping game before, we set out to learn the best ways to use them in GRID 2.

    Codemasters was also interested in the power improvements that the 4th gen Intel Core brings to PC gaming. With longer battery life and better stand-by times, the Ultrabook platform makes an even more compelling gaming environment. Historically, Codemasters did not optimize for power efficiency. With GRID 2, they consistently deliver equivalent or better visuals, while using less power. GRID 2 players win, with longer play times on battery.

    Getting it in tune: Foliage and particles needed help

    Codemasters wanted GRID 2’s graphics to shine on Intel systems, but we had some challenges. To make the game as realistic as possible, we used a particle system for smoke and dust effects from the tires. The tire smoke originally cast a simple shadow on the track, but the smoke effect didn’t shadow itself and had no proper lighting. It relied on artist-created fake lighting, baked into the textures. For years, the artists at Codemasters have been asking for more realistic lighting for their particle systems, but the performance implications had always made it prohibitive. We knew there were better options, and the new processor has given them to us.


    Figure 1.
    Smoke particles before optimization


    In addition to the game’s signature city racing circuits, GRID 2 has several tracks that pass through countryside, featuring dense foliage along the track. This foliage needs to combine with complex lighting to make the racing environment’s atmosphere feel realistic and immersive. Foliage needs to use transparency along its edges to appear realistic and avoid pixel shimmer, especially on moving geometry. In order to render transparent geometry correctly, you must render it in a specific order, which can be impractical in complex real-time scenes like those in GRID 2. An alternative is to use Alpha to Coverage, but that requires multisample anti-aliasing (MSAA), which comes with a performance cost and still has artifacts compared to correct alpha blending.

    Figure 2.Foliage before optimization, showing detail on the right


    Existing solutions to these challenges require a discrete graphics card, and often run brutally slow since they are very computationally heavy. Codemasters needed solutions that were as efficient as possible, and Intel delivered.

    Pixel synchronization: Bringing new performance to existing algorithms
    Both self-shadowing of particles and correct foliage rendering have one thing in common: they are problems that require data to be sorted during rendering. Shadows must be sorted with respect to the light source, and the foliage must be sorted relative to the viewer. One solution is to use DirectX 11 and unordered access views (UAVs). Because of limitations in the way atomic operations can be used, however, the algorithms either require unbounded memory or can produce visual artifacts when memory limits are reached. UAVs also can’t guarantee that each frame will access pixels in the same order, so some sorting is required to prevent visual artifacts between frames.

    The Intel Iris Graphics extension for pixel synchronization gives graphics programmers new flexibility and control over the way that the 3D rendering pipeline executes pixel shaders. Intel researchers used this capability to design algorithms that solve three long-standing problems in real-time graphics:

    • Order-independent transparency
    • Anti-aliasing of complex scene elements such as hair, leaves, and fences
    • Shadows from transparent effects such as smoke

    Unlike previous approaches, Intel’s algorithms with pixel synchronization use a constant amount of memory, perform well, and are robust enough for game artists to intuitively use them in a wide range of game scenes. Because pixel synchronization also guarantees any changes to the UAV contents are always ordered by primitive, they’re consistent between frames. This means that games can now use order-dependent algorithms. Intel published earlier versions of these algorithms in the graphics literature two to three years ago, but they have not been practical to deploy in-game until the advent of pixel synchronization on 4th gen Intel Core processors. The published algorithms are called adaptive order-independent transparency (AOIT) and adaptive volumetric shadow maps (AVSM).
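
    The key to both algorithms is keeping a small, fixed number of nodes per pixel and compressing on insertion. As a rough illustration only (not Intel's published implementation), a C++ sketch of a fixed-budget per-pixel visibility curve might look like this, with all names hypothetical:

    ```cpp
    #include <cmath>
    #include <vector>

    // A per-pixel visibility curve with a fixed node budget, in the spirit of
    // AVSM/AOIT: each node stores (depth, transmittance remaining behind it).
    struct Node { float depth; float trans; };

    class VisibilityCurve {
    public:
        explicit VisibilityCurve(size_t maxNodes) : maxNodes_(maxNodes) {}

        // Insert an occluder with the given depth and opacity (alpha).
        void insert(float depth, float alpha) {
            // Transmittance of everything in front of the new occluder.
            float transInFront = 1.0f;
            size_t pos = 0;
            while (pos < nodes_.size() && nodes_[pos].depth < depth) {
                transInFront = nodes_[pos].trans;
                ++pos;
            }
            // New node: light surviving up to and including this occluder.
            nodes_.insert(nodes_.begin() + pos,
                          Node{depth, transInFront * (1.0f - alpha)});
            // Everything behind the new occluder is further attenuated.
            for (size_t i = pos + 1; i < nodes_.size(); ++i)
                nodes_[i].trans *= (1.0f - alpha);
            if (nodes_.size() > maxNodes_) compress();
        }

        // Total transmittance toward the background.
        float finalTransmittance() const {
            return nodes_.empty() ? 1.0f : nodes_.back().trans;
        }

        size_t nodeCount() const { return nodes_.size(); }

    private:
        // Remove the interior node whose removal changes the area under the
        // step curve least -- the compression heuristic the papers describe.
        void compress() {
            size_t best = 1;
            float bestErr = 1e30f;
            for (size_t i = 1; i + 1 < nodes_.size(); ++i) {
                float err = (nodes_[i + 1].depth - nodes_[i].depth) *
                            std::fabs(nodes_[i].trans - nodes_[i - 1].trans);
                if (err < bestErr) { bestErr = err; best = i; }
            }
            nodes_.erase(nodes_.begin() + best);
        }

        size_t maxNodes_;
        std::vector<Node> nodes_;
    };
    ```

    The important property this preserves is the one the article calls out: memory stays constant no matter how many transparent layers overlap a pixel.
    
    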

    Smoke particle shadow and lighting: Using pixel synchronization for AVSM

    The smoke particle effects are central in GRID 2, so this was an obvious place to apply AVSM. With this feature added, the smoke particles realistically cast shadows on themselves and the track. Artists have greater control over how the particles are lit and shadow themselves, so they have great visual impact.

    "The artists working on 'GRID 2' have been requesting this type of effect for years, and prior to this, it wasn't possible to achieve it at a reasonable cost," said Clive Moody, senior executive producer at Codemasters Racing*. "The fact that this capability will be available to millions of consumers on forthcoming 4th generation Intel Core processors is very exciting to us."

    A PC-only particle system showcases this result.


    Figure 3.
    Smoke particles with AVSM, showing self-shadowing


    Because AVSM combines transparent results in a space-efficient way, there is some compression. You might think that AVSM could introduce unacceptable compression errors, but in practice, visual quality is very good. More importantly, the effect is deterministic since the pixel synchronization ensures pixels are committed in the same order on each frame. This avoids problems with shimmering and flickering that can be introduced by related techniques.

    The first implementation of AVSM in GRID 2 used 8 nodes and performed all lighting calculations at a per-pixel level using the resolution of the current particle system (normally smaller than the actual screen size). Bilinear sampling smoothed out artifacts when viewing a stationary smoke plume in a replay camera. This first implementation was fast enough in-game on higher-end systems with Intel Iris Pro graphics, but with cars having multiple emitters (4+ per car), it took 8 ms to create a shadow map and up to 18 ms to resolve each one. This gave a worst case of about 100 ms per frame for adding AVSM, so improvements were needed if this feature was to be enabled by default.

    The AVSM node itself was improved, so that 4 nodes could be used instead of 8 with no noticeable visual change. On top of that, a major improvement in performance and quality came from adding vertex shader tessellation, with per-vertex lighting. This avoids sampling the AVSM data structures at a more expensive per-pixel level. GRID 2 implements screen space tessellation in the domain shader and then uses faster per-vertex lighting evaluation to sample the shadow map. By using screen space tessellation, we ensure that large particle quads near the front of the screen are broken down into smaller triangles, while small or distant particles are left relatively untouched. The results are nearly identical visually, and performance is improved, especially for the worst-case scenarios such as replaying while focusing on the car doing a wheel spin.

    Once particle self-shadowing was added, it became clear that the individual particles weren’t sorted correctly when drawn on the screen. Originally, the game had sorted particles back-to-front within an emitter, so the transparent particles would render correctly. With multiple emitters per car, however, it was possible for far smoke plumes to be drawn on top of near ones.

    Figure 4. Problem - unsorted smoke particles with AVSM, with far smoke plumes on top of near ones

    This wasn’t a problem before because the original art was uniform. At first, we planned to solve this with pixel synchronization. We created a working version of the AOIT algorithm (described below) to do this, but since the particles are all screen-space-aligned, they can simply be sorted on the CPU instead. This was faster than a pixel synchronization solution, since it used spare performance on the CPU.
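
    Because the particles are screen-space-aligned quads, the cross-emitter fix reduces to a single CPU sort by view-space depth. A minimal sketch (particle fields hypothetical):

    ```cpp
    #include <algorithm>
    #include <vector>

    // Hypothetical particle record: only view-space depth matters for sorting.
    struct Particle {
        float viewDepth;   // distance from the camera along the view axis
        int   emitterId;   // which smoke emitter produced it
    };

    // Sort all particles from all emitters back-to-front (farthest first),
    // so alpha-blended smoke plumes composite correctly across emitters.
    void sortBackToFront(std::vector<Particle>& particles) {
        std::sort(particles.begin(), particles.end(),
                  [](const Particle& a, const Particle& b) {
                      return a.viewDepth > b.viewDepth;
                  });
    }
    ```

    Sorting globally, rather than per emitter, is what prevents a far plume from being drawn over a near one.
    
    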

    The final piece of the lighting puzzle was to integrate the AVSM shadow system with Beast* lighting from Autodesk. Beast lighting is used to light the rest of the geometry, which means the AVSM shadow map must pick up the recalculated lighting data, so that smoke trails will darken under bridges or pick up light sources around the edge of the track.

    While AVSM still has a run-time cost, after optimizations it was well within the budget for visual impact. The worst-case scenario was sped up almost 4x. Typical performance is about 0.7 ms per shadow cascade with a 0.4 ms resolve stage, using about 200K pixels on a quarter screen render target. AVSM is enabled by default on high presets; the algorithm can also be switched off and on with the Advanced Settings menu on any 4th gen Intel Core processor-based system.

    Foliage transparency: Using pixel synchronization for AOIT

    Codemasters’ racing titles have a long history of attractive outdoors scenery, with the DiRT franchise pushing artistic boundaries creating realistic off-road environments. While GRID 2 doesn’t go off-road, there are still plenty of tracks that show off stunning point-to-point circuits.

    Figure 5. The Great Outdoors, showing off the stunning scenery


    Codemasters wanted their artists’ work to shine. Transparency on the foliage edges is one part of creating a realistic look and feel. Originally, the only way to get soft edges was to use Alpha to Coverage with high levels of MSAA enabled. This ran very slow, and Alpha to Coverage doesn’t provide depth to densely packed trees. Codemasters turned to AOIT to get the transparent edges of the foliage looking their best, while also running faster and improving the look of the dense forest sections. No changes were required to the art pipeline.

    Figure 6. Foliage with AOIT, showing soft edges in the detail on the right


    It took about 5 ms to render the trees in an area of the track with heavy foliage, a significant chunk of the frame. When it was first implemented, AOIT pushed that to 11 ms. That approached the cost of running MSAA, which was too long. Optimizations reduced this significantly.

    The initial AOIT implementation used 4 nodes to store the transparency information. It also used a complex compression routine (similar to the one used for AVSM) that took into account the difference in area beneath a visibility graph. Experiments showed that for typical scenes sorted relative to the viewer, a much simpler algorithm could be used since the depth played a smaller part in the visibility decision. Further experiments showed that 2 nodes were enough to store that data. This allowed both color and depth information to be packed into a single 128-bit structure, rather than separate color and depth surfaces. AOIT’s performance was further improved by using a tiled access pattern to swizzle the elements of the UAV data structure, making memory access more cache-friendly. In total, this nearly doubled the performance of AOIT, bringing it down to 2-3 ms on complex foliage heavy scenes and much less on scenes with light foliage.
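
    The tiled access pattern mentioned above is a standard swizzling idea: remap pixel coordinates so that all records for a small tile sit contiguously in the UAV buffer. An illustrative sketch (not GRID 2's actual layout) using 4x4 tiles:

    ```cpp
    #include <cstdint>

    // Map a 2D pixel coordinate to a "tiled" linear index so that all pixels
    // of a 4x4 tile occupy 16 consecutive slots in the UAV buffer, making
    // neighboring pixel accesses more cache-friendly. For this simple version,
    // widthInPixels must be a multiple of the tile dimension.
    uint32_t tiledIndex(uint32_t x, uint32_t y, uint32_t widthInPixels) {
        const uint32_t tileDim = 4;                       // 4x4 pixel tiles
        uint32_t tilesPerRow = widthInPixels / tileDim;
        uint32_t tileX = x / tileDim, tileY = y / tileDim;
        uint32_t inTileX = x % tileDim, inTileY = y % tileDim;
        uint32_t tileIndex = tileY * tilesPerRow + tileX; // which tile
        uint32_t inTile = inTileY * tileDim + inTileX;    // slot inside tile
        return tileIndex * (tileDim * tileDim) + inTile;
    }
    ```

    A plain row-major layout scatters a 4x4 neighborhood across four cache lines; a tiled layout keeps it in one or two, which is where the bandwidth savings come from.
    
    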

    While AOIT proved a good solution for the complex foliage, it still presented some issues. Ideally, all transparent objects would get rendered with the same AOIT path. This would have been expensive since some transparent objects like god rays were already alpha-blended to a large part of the screen and rendered with a traditional back-to-front pass. Combining the two techniques initially created draw-order problems, since it’s difficult to combine traditional back-to-front transparency rendering with AOIT.

    We wanted to keep the efficiency of the back-to-front render for objects that could easily be sorted, while gaining the flexibility of using AOIT on complex intersecting geometry. The solution turned out to be fairly elegant. First, render AOIT without resolving to the screen. Then, execute a back-to-front traditional pass of transparent objects. Anywhere a traditionally rendered object interacted with a screen-space pixel from the AOIT pass, that object was added to the AOIT buffer instead of being rendered. Finally, they’re all resolved. This approach works great, as long as the AOIT objects don’t cover a large part of the screen at the same time as a standard object. This approach allowed ground coverage and god-rays to correctly interact with the tree foliage with only a minimal performance impact. In the end, the AOIT became so efficient it was added to other objects that suffered from aliasing, such as the chain link fences. This allowed for thin geometry to fade out into the distance gracefully, rather than becoming noisy and aliased.

    Figure 7. Fences on the left show aliasing in the distance, AOIT improves fences on the right


    At first, AOIT didn’t work right when MSAA was also enabled. AOIT needs to account for pixels rendered at higher sample frequency at triangle edges. It’s not enough to simply add partially covered pixels into the AOIT buffer with a lower alpha value since they won’t blend properly. These pixels have to be handled separately, adding to the time to compute them. Otherwise, they can reinforce each other and give a double darkening around edges. The solution for GRID 2 was to do this partially, to get the right balance between correctness and compute time.

    AOIT is enabled at Medium quality settings and above, and it can be switched off and on with the Advanced Settings menu. GRID 2 uses Medium quality settings by default on all 4th gen Intel Core processors.

    Instant access: Lessons learned
    The 4th generation Intel Core processors brought two new extensions to DX11 graphics. Pixel synchronization was heavily used in GRID 2. What about instant access?

    Instant access provides access to resources in memory shared by the CPU and GPU. Since GRID 2 already used direct memory access on the consoles, at first we assumed it would be easy to use on the PC as well. Several systems, including particles, ground cover, crowd instance data, and crowd camera flashes, accessed the vertex data. Instead of giving an immediate speedup, however, instant access actually introduced stalls in the render pipeline: DirectX was still honoring the buffer usage and would wait to unlock a resource that was already in flight to the graphics engine.

    We could have added manual double-buffering to work around this, but we realized that the driver was already doing a good job optimizing its usage on the linearly-addressed memory, so we weren’t likely to see a large speedup. As a result, instant access wasn’t used in GRID 2.
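
    The manual double-buffering considered above is a standard way to avoid such stalls: cycle through N copies of a dynamic buffer so the CPU never writes into a buffer the GPU may still be reading. A sketch of the bookkeeping only (real code would pair each slot with a graphics API buffer object):

    ```cpp
    #include <cstddef>

    // Cycle through N buffer copies: the CPU fills slot (frame % N) while the
    // GPU may still be consuming the previous N-1 frames' slots. Illustrative
    // bookkeeping only; no graphics API calls are shown.
    class BufferRing {
    public:
        explicit BufferRing(size_t count) : count_(count) {}

        // Index of the buffer the CPU may safely fill this frame.
        size_t acquireForWrite() { return frame_++ % count_; }

    private:
        size_t count_;
        size_t frame_ = 0;
    };
    ```
    
    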

    We talked about a few ideas that could have given performance boosts, like using instant access for texture memory. GRID 2 doesn’t stream the track data, and only a small number of videos are uploaded during a race, so we didn’t expect a large gain. After that, we focused our attention on pixel synchronization since we had such obvious benefits from that extension in this game.

    Your game may be able to take advantage of instant access in several ways. Instant access might give faster texture updates from the CPU (working on native tiled formats), since your game avoids the multiple writes that come from reordering data for the driver. Or you may find major gains accessing your geometry if you have a lot of static vertex geometry with small subresource updates per frame.

    Try it out, and see!

    Anti-aliasing: Big improvements
    Anti-aliasing helps games look great. Multi-sample anti-aliasing (MSAA) is commonly used and supported by Intel graphics hardware, but it can be expensive to compute. Since GRID 2 has a very high standard for visual quality and run-time performance, we weren’t satisfied with performance trade-offs for enabling MSAA, especially on Ultrabook systems with limited power budgets. Together, Intel and Codemasters incorporated a technique we’ll call conservative morphological AA (CMAA).

    While you should look for full details on CMAA in an upcoming article and sample, we’ll outline the basics. As a post-process AA technique, it’s similar to morphological AA (MLAA) or subpixel morphological AA (SMAA). It runs on the GPU and has been tailored for low bandwidth, with about 55-75% of the run-time cost of 1x SMAA. CMAA approaches the quality of 2xMSAA for a fraction of the cost. It does have some limited temporal artifacts, but looks slightly better on still images.

    For comparison, at 1600x900 resolution with High quality settings, enabling 2xMSAA adds 5.0 ms to the frame, but CMAA adds only 1.5 ms to the frame (at a frame rate of 38.5 FPS). CMAA is a great alternative for gamers who want a nicely anti-aliased look but don’t like the performance of MSAA.
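
    The trade-off is easiest to see in frame-time terms. A base rate of 38.5 FPS is about 26 ms per frame, so a small helper (using only the numbers above) shows the resulting frame rates:

    ```cpp
    // Convert between frame time and frame rate to compare AA costs.
    // Numbers from the article: base rate 38.5 FPS at 1600x900/High,
    // +5.0 ms per frame for 2xMSAA vs. +1.5 ms per frame for CMAA.
    double msPerFrame(double fps) { return 1000.0 / fps; }

    double fpsAfterCost(double baseFps, double addedMs) {
        return 1000.0 / (msPerFrame(baseFps) + addedMs);
    }
    // fpsAfterCost(38.5, 5.0) -> ~32.3 FPS with 2xMSAA
    // fpsAfterCost(38.5, 1.5) -> ~36.4 FPS with CMAA
    ```

    In other words, CMAA gives up roughly 2 FPS where 2xMSAA gives up roughly 6.
    
    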

    Figure 8. Original garage on the left shows some aliasing, better with CMAA applied on the right.


    Because CMAA is a post-processing technique, it also works well in conjunction with AOIT, without suffering from the sampling frequency issues discussed above.

    SSAO: A study in contrasts
    GRID 2 contains screen-space ambient occlusion (SSAO) code that runs great on some hardware, but didn’t run as well as we’d like on Intel® hardware. There are different SSAO techniques, and GRID 2 originally used high definition ambient occlusion (HDAO). When we first studied it, it took 15-20% of the frame, which was far too much.

    The original SSAO algorithm uses compute shaders, but CS algorithms can sometimes be tricky to optimize for all variations of hardware. We worked together to create a pixel shader implementation of SSAO that performs better in more cases.

    Figure 9. SSAO turned off on the left, SSAO turned on and running in a pixel shader on the right.


    The CS implementation relies heavily on texture reads/writes. The PS implementation uses more computation than texture reads/writes, so it doesn’t use as much memory bandwidth as the CS implementation. As a result, the PS version of SSAO runs faster on all hardware we tested and runs significantly faster on Intel graphics hardware. While the new version is the default, you may choose either SSAO implementation from the configuration options.

    Looks great, less battery: Minding the power gap
    More gamers than ever play on the go. This poses some special challenges for game developers. To help players keep an eye on their charge while playing, GRID 2 displays a battery meter on-screen. Codemasters used the Intel® Laptop and Netbook Gaming Technology Development Kit to check the platform’s current power level and estimated remaining battery time. When you’re running on battery power, that information is discreetly shown as a battery meter in the corner of the screen.

    When playing on battery, the CPU and GPU workloads each contribute to the overall power use. This makes it a careful balancing act to optimize for power since changes to one area may affect the power use of the other.

    First, we optimized any areas where extra work was being done on the CPU that didn’t affect the GPU. For example, there were some routines that converted back and forth between 16-bit floats and 32-bit floats. Those routines used simple reference code, but after study, we replaced them with a different version that ran much faster.
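
    For context, this is the kind of conversion involved. The sketch below is a deliberately simplified illustration, not GRID 2's actual routine: it flushes denormals to zero and truncates rather than rounds, which keeps the bit manipulation easy to follow.

    ```cpp
    #include <cstdint>
    #include <cstring>

    // Simplified float32 -> float16 conversion: denormals flush to zero,
    // overflow saturates to infinity, mantissa is truncated (no rounding).
    uint16_t floatToHalf(float f) {
        uint32_t bits; std::memcpy(&bits, &f, 4);
        uint32_t sign = (bits >> 16) & 0x8000;
        int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;
        uint32_t mant = (bits >> 13) & 0x3FF;
        if (exp <= 0)  return (uint16_t)sign;              // zero/denormal
        if (exp >= 31) return (uint16_t)(sign | 0x7C00);   // overflow -> inf
        return (uint16_t)(sign | ((uint32_t)exp << 10) | mant);
    }

    // Matching float16 -> float32 conversion.
    float halfToFloat(uint16_t h) {
        uint32_t sign = (uint32_t)(h & 0x8000) << 16;
        uint32_t exp  = (h >> 10) & 0x1F;
        uint32_t mant = h & 0x3FF;
        uint32_t bits;
        if (exp == 0)       bits = sign;                   // zero (flushed)
        else if (exp == 31) bits = sign | 0x7F800000 | (mant << 13);  // inf/NaN
        else                bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);
        float f; std::memcpy(&f, &bits, 4);
        return f;
    }
    ```

    A production version would add rounding and denormal handling, or use hardware conversion instructions where available.
    
    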

    Another CPU power optimization came from the original use of spin locks for thread synchronization. This is very power inefficient; it keeps one CPU core running at full frequency, so the CPU’s power management features cannot reduce the CPU frequency to save power. It can also prevent the operating system’s thread scheduler from making the best thread assignment. Several parallel job systems were rewritten, including the CPU-side particle code. They were changed to reduce the amount of cross-thread synchronization.
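
    The power-friendly alternative to spinning is to block on an OS synchronization primitive, so the idle core can drop its frequency or sleep. A minimal C++ sketch of the pattern (not Codemasters' actual job system):

    ```cpp
    #include <condition_variable>
    #include <mutex>

    // A worker signals completion; the waiter blocks on a condition variable
    // instead of spinning, letting the OS idle the core between jobs.
    struct JobSignal {
        std::mutex m;
        std::condition_variable cv;
        bool done = false;

        void signal() {
            { std::lock_guard<std::mutex> lock(m); done = true; }
            cv.notify_one();
        }

        void wait() {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return done; });  // no busy-waiting
        }
    };
    ```

    A spin lock keeps one core at full frequency the whole time it waits; the blocking version costs a context switch but lets the CPU's power management do its job.
    
    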

    One of the best power optimizations that can be done on a mobile platform is to lock the frame rate to a fixed interval. This lets both the CPU and GPU enter a lower power state between frames. Since GRID 2 was already optimized around a target of 30 FPS on default settings, it wouldn’t have had much effect if we had simply set a 30 FPS frame rate cap. Instead, there’s a special mode added to the front-end options. If power saving is enabled, the game will reduce some visual quality settings when the user is running on battery. Since none of the setting changes require a mode change, they can happen seamlessly during play. These changes raise the average frame rate above 30 FPS, so a 30 FPS frame rate cap is now effective at saving power and prolonging game play on battery.
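
    A fixed frame cap boils down to sleeping until the next frame boundary. A sketch of the idea (illustrative only, not GRID 2's actual frame pacing code):

    ```cpp
    #include <chrono>
    #include <thread>

    // Sleep until the next frame boundary (~33 ms apart for 30 FPS), giving
    // both the CPU and GPU a chance to enter low-power states between frames.
    class FrameCap {
    public:
        explicit FrameCap(int targetFps)
            : frameDuration_(std::chrono::microseconds(1000000 / targetFps)),
              nextFrame_(std::chrono::steady_clock::now() + frameDuration_) {}

        // Call once per frame, after rendering and presenting.
        void waitForNextFrame() {
            std::this_thread::sleep_until(nextFrame_);
            nextFrame_ += frameDuration_;
        }

        std::chrono::microseconds frameDuration() const { return frameDuration_; }

    private:
        std::chrono::microseconds frameDuration_;
        std::chrono::steady_clock::time_point nextFrame_;
    };
    ```

    Sleeping until an absolute deadline (rather than for a relative interval) keeps the cadence steady even when frame times vary.
    
    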

    Finally, the game’s built-in benchmark now uses power information. When profiling the game over a single run, GRID 2 logs power and battery information as the benchmark loops. If you study these results over time, you can see how power-efficient your current settings are on your benchmark system.

    Conclusions


    Working together, Intel and Codemasters found ways to deliver a fantastic game that looks and runs great on Intel’s latest platforms.

    Now that they can be built on top of pixel synchronization, AVSM and AOIT bring new levels of visual impact along with great performance. Together, they enrich the game environment and give a greater level of immersion than ever before.

    The addition of CMAA brings a new option for high-performance visual quality. Moving SSAO to a pixel shader helps the game run faster. After optimizing usage of the DirectX API with more efficient state caching, optimizing float conversion routines, removing spin locks, and automatically adjusting quality settings and capping the frame rate, the game gets the most out of your battery. GRID 2 also helps gamers keep track of their battery power when they’re playing on the go.

    Adding those together, GRID 2 looks and runs great on Intel’s latest platforms. Consider the same changes in your game!

    References


    Latest AVSM paper and sample: http://software.intel.com/en-us/blogs/2013/03/27/adaptive-volumetric-shadow-maps
    Original AVSM paper and sample: http://software.intel.com/en-us/articles/adaptive-volumetric-shadow-maps
    AOIT paper and sample: http://software.intel.com/en-us/articles/adaptive-transparency
    Laptop and Netbook Gaming TDK Release 2.1: http://software.intel.com/en-us/articles/intel-laptop-gaming-technology-development-kit
    4th Generation Intel® Core™ Processor Graphics Developer Guide: http://software.intel.com/en-us/articles/intel-graphics-developers-guides

    About the author


    Paul Lindberg is a Senior Software Engineer in Developer Relations at Intel. He helps game developers all over the world to ship kick-ass games and other apps that shine on Intel platforms.

    Intel, the Intel logo, Core, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.


UltraDynamo Case Study - App Innovation Contest Entertainment Category Winner


    By William Van Winkle

    Downloads


    UltraDynamo Case Study [PDF 662.43 KB]

    From Top Gear to Top Winner


    By day, David Auld is an Offshore Installation Manager (OIM) in the oil and gas industry. But when the production platform is humming along without him, Auld indulges his hobby as a devout “petrol-head” (car enthusiast). He also finds time to feed a passion for programming, which led to him earning a BSc Honours Degree in Computing in 2012. Surprisingly, these three facets of the native Scotsman all converged when Auld won the Entertainment Category of the Intel® App Innovation Contest.


    With UltraDynamo, art may have copied life when developer Dave Auld took inspiration from his own Mercedes console. (Source: http://www.mbusa.com/vcm/MB/DigitalAssets/Vehicles/ClassLanding/2013/C/Coupe/Gallery/2013-C-Class-Coupe-Gallery-009_wr.jpg)

    Auld has been a CodeProject member for nearly 10 years. He also takes pride in owning a Mercedes-Benz C63 AMG sedan, the latest in his long line of personal sports cars and one particularly blessed with a graceful and classic dash console. Perhaps this was in the back of Auld’s mind when he noticed CodeProject advertising the Intel App Innovation Contest. He read through several of the proposals and thought, “There must be something I can come up with...” From there, all it took was watching a Top Gear rerun featuring the Bugatti Veyron and its horsepower indicator. “How did Bugatti do that?” he wondered. In working toward an answer, Auld stumbled further into a series of questions and revelations that led to his award-winning success only weeks later.

    The UltraDynamo App: Form and Function


    UltraDynamo is a Microsoft Windows* Desktop application that uses many of the Ultrabook™ device platform’s sensors to provide motor sports enthusiasts with performance data about their vehicles. As shown in the screen capture below, UltraDynamo offers a range of readouts, including x-, y-, and z-axis accelerometers, a compass rose, a speedometer, inclinometers, and gyrometers. These might be presented as charts, pictures, numeric readouts, and so on. The data for each of these springs from various Ultrabook device sensors, including the accelerometer, gyrometer, inclinometer, and a Global Positioning System (GPS) sensor. In short, UltraDynamo presents a configurable on-screen dashboard.


    Without real sensor data on hand, Auld saved ample development time by simulating input values. This screen capture shows a typical simulation dialog box and its effect on the main dashboard.

    In looking at the application, Auld’s priority was clear: Keep the front end as clean and simple as possible to minimize key entry by the user. (Obviously, requiring manual interaction while the user is behind the wheel would be undesirable.) In the same vein, he understood that different users would come to the app with different needs and priorities. The UI should reflect that. Thus he broke out individual readout functions into separate window elements that users could reposition and resize as desired.

    For Auld, this UI simplicity should also be reflected in the program’s responsiveness. “Usability is key,” he said. “Users want that reward: when they click, the app does what’s expected. That’s what will keep them wanting to use the program.”


    The UltraDynamo app relies on a flexible dashboard interface featuring a range of gauges, including compass heading, acceleration, speed, and horsepower.

    Auld developed UltraDynamo on a pair of PCs running Windows 8 Pro, one desktop and one laptop. Neither had any sensors, but both had copies of Microsoft Visual Studio* 2012 Pro. Once his concept application was accepted, CodeProject contacted Auld to confirm that he felt he could provide a working application for the competition. When both agreed that it was feasible, CodeProject sent Auld a sensor-equipped Ultrabook device with Windows 8 Pro and Visual Studio 2012 Pro. Auld noted that in order to keep all of his development systems “in check,” he used VisualSVN Server* as a source code library. This library is hosted by a cloud provider on a Windows 2008 R2 virtual machine.

    “I used the AnkhSVN plugin for Visual Studio,” he added. “It was a simple case of checking in any code changes on one system following any edits, then updating the source to the latest version on the others. This worked well as a way to manage the source from a multi-system, single-developer point of view.”

    Challenges Addressed During Development


    One of the first obstacles Auld had to conquer was a lack of resources offering suggestions on how to handle sensor data. For example, after getting a temperature value, what does the programmer do with it? Ultrabook devices and their many sensors are relatively new to the market, so there isn’t a large bed of third-party examples and advice to follow beyond Intel’s own Ultrabook™ and Tablet Windows* 8 Sensor Development Guide and the Windows 8 code samples Intel offers. Auld had to figure out many of the answers on his own.

    His first such problem was the original program interface. It was, as he put it, “just a bunch of random numbers on the screen.” He needed gauge controls to mimic actual dashboard readouts. At first, he tried to design these on his own, but it soon became clear that there wasn’t enough time to build what he wanted from scratch. He searched the Web, cast about the CodeProject site, and finally unearthed a license-free dial control called Aqua Gauge, written by Ambalavanar Thirugnanam. This dropped easily into Auld’s code and became the backbone on which the other UltraDynamo controls were built.

    Auld also found that frequent accelerometer sensor updates were causing an event flood, which in turn stalled the interface graphics. Through trial and error, he worked to change the time intervals for eventing data. Finally, he got the display working and stable, although he hopes to return to it for further tweaking. Rather than poll data on a fixed interval, Auld wants to see the app work on a more intelligent feedback loop wherein the app doesn’t request more data until the graphics system is ready to handle it.
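    The throttling Auld describes can be sketched as a simple rate limiter that drops sensor readings arriving sooner than a minimum interval after the last accepted one. This is an illustrative sketch only; the class and names below are hypothetical, not from UltraDynamo's source.

```cpp
#include <cstdint>

// Hypothetical throttle: forwards a sensor reading to the UI only if at
// least min_interval_ms have elapsed since the last accepted reading,
// discarding the rest to avoid the event flood that stalled the graphics.
class SensorThrottle {
public:
    explicit SensorThrottle(std::uint64_t min_interval_ms)
        : min_interval_ms_(min_interval_ms), last_accepted_ms_(0), primed_(false) {}

    // Returns true if the reading at timestamp_ms should be processed,
    // false if it should be dropped.
    bool accept(std::uint64_t timestamp_ms) {
        if (!primed_ || timestamp_ms - last_accepted_ms_ >= min_interval_ms_) {
            last_accepted_ms_ = timestamp_ms;
            primed_ = true;
            return true;
        }
        return false;
    }

private:
    std::uint64_t min_interval_ms_;
    std::uint64_t last_accepted_ms_;
    bool primed_;
};
```

    The feedback loop Auld envisions would go a step further: instead of a fixed interval, the gauge code would signal readiness back to the throttle once a frame has rendered.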


    UltraDynamo’s Configuration tab offers a range of input frequency settings for the Ultrabook™ device’s various sensors.

    As mentioned earlier, Auld’s day job experience played into his UltraDynamo development. After having his proposal accepted for the Intel contest, Auld had to wait to receive his Ultrabook device, and during this time his job required him to go offshore for many days. Fortunately, his background as a control systems manager found him frequently building simulations so that the graphics could be tested without requiring the production plant’s systems to be available. The same methods applied here. He wrote the graphics first, created a dummy set of data, and worried about the sensors later.

    “It was simple,” said Auld. “Put a bunch of sliders onto a form and group them into the relative component, whether it was accelerometers, gyrometers, or whatever. That allowed me to manipulate the graphic as part of my testing without actually having hardware sensors available to me. That was a significant benefit. Otherwise, I would have had to spend several days writing code, then get the Ultrabook from Intel and find that nothing worked. I would have lost a huge amount of time. Let's program for the graphics and write it in such a way that I can just plug in the sensors at a later date and, in theory, it should all work nicely.”
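    The "write the graphics first, plug in the sensors later" approach amounts to hiding the data source behind an interface, so a slider-driven simulation and the real hardware are interchangeable. The sketch below is a minimal illustration under assumed names and an assumed ±2g gauge scale, not Auld's actual code.

```cpp
// Illustrative only: the gauge logic reads from an abstract source, so a
// simulated source (Auld's "sliders on a form") can stand in for hardware.
struct IAccelerometerSource {
    virtual ~IAccelerometerSource() = default;
    virtual double readX() const = 0;  // g-force on the X axis
};

// Dummy source used while no sensor-equipped Ultrabook is available.
class SimulatedAccelerometer : public IAccelerometerSource {
public:
    void setX(double x) { x_ = x; }         // driven by a UI slider
    double readX() const override { return x_; }
private:
    double x_ = 0.0;
};

// The gauge only sees the interface; swapping in the real sensor driver
// later requires no changes here.
double needleAngleDegrees(const IAccelerometerSource& src) {
    // Map an assumed -2g..+2g range onto a -90..+90 degree needle sweep.
    double g = src.readX();
    if (g > 2.0) g = 2.0;
    if (g < -2.0) g = -2.0;
    return g * 45.0;
}
```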

    Auld had to take some educated guesses on the data boundaries the Ultrabook device sensors would generate, but once he finally received the device and got started, it all worked fine. Fortunately, his long career and ample experience with touch and sensor development helped him to steer clear of any major issues in these areas.

    UltraDynamo’s last major hitch revolved around the MSI installer required for submission to the Intel AppUp® center. Originally, Auld intended to generate the package from the InstallShield* Lite tool that comes bundled with Microsoft Visual Studio 2012. However, no amount of banging his head against the application helped him understand how to generate an MSI package directly. No matter what he tried, all he could get from the program was an .EXE installer, which the Intel AppUp® center wouldn’t accept. Finally, Auld did find a way to “double-install” into an MSI package, but the Intel AppUp center wouldn’t accept that either. Apparently, examination by Intel techs in a test environment revealed that “the shortcuts that the application installed weren’t announced shortcuts.”

    “To this day, I haven’t got a Scooby what that means,” admitted Auld.

    Fortunately, Intel came to Auld’s rescue. Tech support staff sent him an alpha version of a tool they used internally for app store packaging that relied on WIX* as its underlying toolset for generating installer packages.

    “After working out how the Intel-provided app ran a couple of the WIX underlying commands to generate the MSI package, I took the XML file that the tool had created and used it as a foundation. I tweaked the internal XML nodes, got my shortcuts displayed on the screen, and then manually ran the WIX underlying commands to generate the MSI package. This then went through verification at Intel without any issue.”

    All told, Auld spent about four weeks designing UltraDynamo while working a full-time job. This was broken up with all-consuming work on his production platform, waiting for verification from Intel on different code fixes and so forth. It was a tense, utterly time-constrained process, but it forced him to focus on what was essential for meeting milestone deadlines and to find solutions within his limitations. The lessons here for a part-time, lone programmer were significant.


    Simple but effective, this graph shows UltraDynamo plotting real-time data from X, Y, and Z axis accelerometer sensor inputs.

    Lessons Learned, Advice Given


    UltraDynamo went on to win the Intel App Innovation Contest’s Entertainment category, but that doesn’t mean the application is finished. Auld said he had to leave many ideas on the drawing board because of time constraints, and the UI that did emerge was largely tailored to his own interests. He would like to see the app develop “workspaces” in which users could customize their dashboards and save them like profiles. He would also like to find more professional-looking gauges before commercializing the software.

    UltraDynamo’s development was much like any other app development, fraught with its own complications, delays, and breakthroughs. “Maybe it's frustrating,” said Auld, “but it does help you to think for yourself, and to try things and dig deeper. In the process, you become proficient.”

    He encourages other developers to be willing to learn, experiment, and fail. As an apprentice, when learning the systems on a new platform, Auld had to figure out all of the plumbing and parts and systems on his own. Supervisors would steer and make sure he “didn’t do anything stupid,” but it was an environment for the inquisitive, adventurous, and self-motivated.

    Auld says that such a mindset is becoming increasingly rare in a time when young programmers would rather be spoon-fed code than take five minutes to write something and see if it works. On CodeProject, Auld tries to point people in a direction and encourage them to reverse-engineer what other people have done instead of saying, “There’s your 25 lines of code. Get on with it.” Every project involves a learning and research phase. Expect it, don’t look for shortcuts, and keep the results of these learning processes in a personal code library.

    Roll with what you’ve got. Given his time constraints, Auld had to use some generic car images as part of the interface’s readouts. He hopes to expand this image set in the future.

    Even before starting to code, Auld recommends that developers write out the app they have in mind as a narrative. Approach it as a technical article. By creating at least a plan in bullet-point form, it forces the developer to break down the application’s structure and functionality, which in turn helps offer more guidance in the application’s development. Writing an application as an article will force the developer to think about what he or she is trying to convey to the end user and the ways in which those priorities can be best communicated.

    Finally, Auld encourages developers to “just dive in.” Try and fail. Don’t be afraid to ask questions and get involved on sites such as CodeProject. Auld admits to being little more than a silent trawler for his first seven or eight years on the site. Armed with enough years of slow but sure learning, he was finally ready to become more active and give back into the community. He adds, “That is an important thing for people to do. Don’t just take all the time, but give back, as well.”

    Resources


    Auld provides extensive details on the processes and tools he used in constructing UltraDynamo in his five-part CodeProject article. This shows his path from Visual Studio setup through code signing and packaging. Along the way, he also investigated online coding tool reseller ComponentSource and, as noted earlier, resources found at CodeProject ultimately formed the foundation for UltraDynamo’s interface.

    Auld stresses that he couldn’t have won his contest category without help from Intel’s forums, Intel tech support, and, most of all, the developer community. “Without the inspiration and help of notable gurus on CodeProject like Pete O’Hanlon, who helped manage the sensors, this wouldn’t have happened. The code I had written was garbage in comparison. Listening to other people is so important.”

     

     

     

    Portions of this document are used with permission and copyright 2012 by CodeProject. Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see http://software.intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel AppUp, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved

    Ballastic Game Case Study - App Innovation Contest Game Winner


    By Edward Correia

    Downloads


    Ballastic Game Case Study[927 KB]

    Introduction


    In October 2012, Matthew Pilz, a programmer from rural Wisconsin, accepted a challenge. He had been lurking around the CodeProject website doing research when he came across a blog post by Lee Bamber, programming legend and founder of TheGameCreators.com, urging people to take part in the Intel® App Innovation Contest. Using a game idea that had stuck with him from years earlier, Pilz entered the contest, and in two months submitted his app, Ballastic. It won the gaming category's top prize.

    About the Ballastic App

    Ballastic was inspired by CANO-Lab's Pendulumania*, a Japanese freeware game that Pilz had played in the late 1990s. The object of the game was to collect sparkling targets for points. Ballastic expands on that idea by adding rewards for capturing certain targets and for avoiding hazards and challenges that increase in number along with the difficulty levels. “I thought that would be the perfect platform using the touch interface to create the vision in my head,” Pilz said.


    Figure 1. As Ballastic game levels advance, balls are added to the game piece to increase the difficulty of movement.

    In addition to participating in the competition, Pilz said the original goal of Ballastic was to make a casual game that could easily be learned and played by people at any age or skill level. "I didn't have any specific target demographics. Simplicity is my goal with every app I develop," said Pilz.

    Mission accomplished. Pilz has received feedback from both kids and adults saying that Ballastic is a great game that they have a good time playing.


    Figure 2. The Ballastic start screen.

    Since this was the first application Pilz had developed for Intel's Ultrabook™ device platform, he wanted to take advantage of as many of the platform's unique capabilities as he could and use them in unique ways. As the game progresses, the backgrounds change and the difficulty increases, similar to Lima Sky's Doodle Jump*, which also presents just a single level that gets harder and harder. Ballastic's backgrounds change in another unique way. "The light sensor capability of the Ultrabook device is one of the coolest features that I have seen on any development environment. That capability allowed me to actually create different themes and different graphics depending on the level of light in the room."

    By using the ambient light sensor in this way, Pilz was able to program the game to take on different appearances based on where it's being played. If the game is played outside in the sunlight, it will have a contrasted appearance, compared to being played in a room at night where it will have a totally different look. And for indoor play, backgrounds can vary along with screen brightness based on how much light is in the room. "I thought that would be an interesting concept," said Pilz.
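    The ambient-light theming Pilz describes boils down to mapping the sensor's lux reading onto a small set of named themes. The thresholds below are assumptions for illustration; Ballastic's actual cut-offs are not published.

```cpp
#include <string>

// Hypothetical lux thresholds mapping ambient light to a visual theme,
// in the spirit of Ballastic's light-sensor-driven backgrounds.
std::string themeForLux(double lux) {
    if (lux < 50.0)   return "night";     // dark room: dimmed assets
    if (lux < 1000.0) return "indoor";    // typical room lighting
    return "daylight";                    // outdoors: high-contrast assets
}
```

    In AGK, the lux value itself would come from a call such as GetRawLightLevel(), listed later in this article.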

    Pilz already had experience with touch, having built several apps for the Apple iPad*, iPhone*, and other handheld and mobile devices. But he was naturally excited to have access to the increased computing resources afforded by the Ultrabook device platform. "Obviously the increased hardware support and graphical capabilities of Ultrabook provide an environment for more graphically rich and CPU-intensive applications than what you can get currently on mobile devices."

    The increased processing power and feature set of the Ultrabook device allowed Pilz to add more shine and polish to Ballastic than could have been easily achieved on mobile devices. The high resolution and wide-screen display allowed him to design assets natively for 1600x900 and higher resolutions at a 16:9 aspect ratio, whereas most mobile devices require that the game be 1024x768 or lower, and often at a 4:3 aspect ratio, thus diminishing the overall style and available game space. The graphical assets and sprite effects would likewise suffer from performance and visual discrepancies if translated to a less powerful platform. Most notably, the fluid background effects and dynamic particles seen throughout the game—comprising hundreds of sprites with constantly changing properties—would not perform as well on a mobile device without requiring significant optimizations and reduction in quantity.

    Another benefit of the Ultrabook device is that its GPU supports a much higher refresh rate than the 30 to 60 frames per second that can be achieved on most other platforms. “Although a lower frame rate is certainly acceptable, being able to uncap Ballastic so that it runs at several hundred frames per second on the Ultrabook allowed me to provide exceptionally smooth and responsive controls and animations throughout the game,” said Pilz.

    The Challenges of Creating Ballastic


    Pilz faced several major challenges when developing Ballastic for the Ultrabook device, including the following:

    • A small window of time in which to develop, test, and submit the app
    • An unfamiliarity with the Ultrabook device platform
    • Adapting to differences in touch development between iOS and Windows* 8

    Developing, Testing, and Submitting the App

    Pilz started in mid-October, about one month before the November 20 deadline, so he spent about a month-and-a-half of full-time, solid work—including several sleepless nights—to get his app to where he felt comfortable releasing it. Pilz had made an early design decision to make the app touch compatible, so its buttons and controls had to be large enough to allow users to touch them without a problem. He built an on-screen keyboard for entering high scores, and everything within the game can be accessed purely by touch. Pilz also incorporated the Ultrabook device keyboard and mouse as optional inputs for gameplay and menu navigation.

    All-in-all, Pilz felt the experience of implementing touch controls on Windows 8 was similar to that of iOS, but there were a few bumps. “The biggest obstacles I found were design-oriented instead of SDK- or programming-related,” he said. One challenge was to ensure that all functionality could be interacted with easily and intuitively via touch. This meant bigger buttons and icons than what would be necessary if supporting only mouse input. The touch sensor functions were abstracted out nicely using the App Game Kit* (AGK), a set of OpenGL*-based libraries that made it a breeze to implement without having to dive into any of the raw SDK commands for such functionality.

    Although Ballastic uses only single-touch controls, Pilz said it would have been easy to support multi-touch if the game needed to. AGK includes functions such as GetRawTouchCount(), GetRawTouchCurrentX(index), GetRawTouchCurrentY(index), GetRawTouchLastX(index), and GetRawTouchLastY(index) that can interpret as many touches as the device supports.
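    A multi-touch handler built on those AGK calls is essentially a loop over the active touch points. The sketch below is runnable on its own only because it substitutes stub implementations backed by a fake touch table; in a real AGK Tier 2 project the library supplies these functions.

```cpp
#include <vector>

// Stand-in stubs shaped like the AGK touch functions named above; here
// they read from a fake touch table so the loop below can run anywhere.
struct Touch { float x, y; };
static std::vector<Touch> g_touches = {{10.0f, 20.0f}, {30.0f, 40.0f}};

int   GetRawTouchCount()         { return static_cast<int>(g_touches.size()); }
float GetRawTouchCurrentX(int i) { return g_touches[i].x; }
float GetRawTouchCurrentY(int i) { return g_touches[i].y; }

// Iterate every active touch point, as a multi-touch handler might do
// once per frame.
float sumOfTouchX() {
    float sum = 0.0f;
    for (int i = 0; i < GetRawTouchCount(); ++i)
        sum += GetRawTouchCurrentX(i);
    return sum;
}
```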

    Pilz said the native SDK commands provided by Intel also were straightforward. Intel’s SDK uses common touch interfaces provided by Microsoft, including three different mechanisms for handling touch input: WM_TOUCH, which supports both Windows 7 and Windows 8 Desktop; WM_POINTER, which is specific to Windows 8 Desktop; and PointerPoint, for developing Windows 8 Modern UI (also known as “Metro”) apps. Ballastic was developed as a Windows 7/8 desktop application, so had it been built against the standard interfaces directly, it would most closely align with WM_TOUCH events. This approach allows it to run on any modern Windows desktop machine, supporting a greater number of devices than the other two methods. Much like iOS, Windows applications generally store touch information in a simple array or list that can be iterated through to read and interpret the touch points with ease.
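    That array-iteration pattern can be sketched without the Win32 headers. In real WM_TOUCH code the records are TOUCHINPUT structures from <windows.h>, filled in by GetTouchInputInfo, with coordinates in hundredths of a pixel; the struct below is a minimal stand-in so the loop is self-contained.

```cpp
#include <vector>
#include <utility>

// Minimal stand-in for the Win32 TOUCHINPUT record (the real one lives in
// <windows.h>); WM_TOUCH delivers coordinates in hundredths of a pixel.
struct TouchInput { long x; long y; unsigned long id; };

// A WM_TOUCH handler walks the record array much like an iOS touch list,
// converting each point to pixel coordinates.
std::vector<std::pair<long, long>> toPixelPoints(const std::vector<TouchInput>& inputs) {
    std::vector<std::pair<long, long>> pts;
    for (const TouchInput& ti : inputs)
        pts.push_back({ti.x / 100, ti.y / 100});  // hundredths of a pixel -> pixels
    return pts;
}
```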

    The Ultrabook™ Device Platform

    Fortunately for Pilz, Lee Bamber and the people at TheGameCreators.com, of which Pilz has been an active member since its inception in 1999, were hard at work on the latest beta of AGK, which does most of the game programming's heavy lifting. Pilz followed Bamber’s blog from Intel’s Ultimate Coder Challenge and read that he was adding features to support the various Ultrabook device sensors in beta versions of AGK, which he subsequently released to the community. Pilz knew he had to come up with the best platform and language to develop a game quickly. Bamber's code mapped Ultrabook device functions to a few easy-to-understand functions in both Tier 1 (BASIC) and Tier 2 (C++):

    Sensor Support
    GetNFCExists()
    GetGeolocationExists()
    GetCompassExists()
    GetGyrometerExists()
    GetInclinometerExists()
    GetLightSensorExists()
    GetOrientationSensorExists()

    Notifications
    NotificationCreate()
    NotificationReset()
    GetNotification()
    GetNotificationData()
    GetNotificationType()
    SetNotificationImage()
    SetNotificationText()

    Near Field Communications (NFC)
    GetRawNFCCount()
    GetRawFirstNFCDevice()
    GetRawNextNFCDevice()
    GetRawNFCName()
    SendRawNFCData()
    GetRawNFCDataState()
    GetRawNFCData()

    Geolocation
    GetRawGeoLatitude()
    GetRawGeoLongitude()
    GetRawGeoCity()
    GetRawGeoCountry()
    GetRawGeoPostalCode()
    GetRawGeoState()

    Compass
    GetRawCompassNorth()

    Gyrometer
    GetRawGyroVelocityX()
    GetRawGyroVelocityY()
    GetRawGyroVelocityZ()

    Inclinometer
    GetRawInclinoPitch()
    GetRawInclinoRoll()
    GetRawInclinoYaw()

    Ambient Light Sensor
    GetRawLightLevel()

    Device Orientation Sensor
    GetRawOrientationX()
    GetRawOrientationY()
    GetRawOrientationZ()
    GetRawOrientationW()

    Touch Sensor
    GetRawTouchCount()
    GetRawTouchCurrentX()
    GetRawTouchCurrentY()
    GetRawTouchLastX()
    GetRawTouchLastY()
    GetRawTouchReleased()
    GetRawTouchStartX()
    GetRawTouchStartY()
    GetRawTouchTime()
    GetRawTouchType()
    GetRawTouchValue()

    Complete documentation of the above commands will be available on AppGameKit.com once the final version of AGK 1.08 is released, as well as within the beta downloads for existing customers.

    For Pilz, this meant that hardware features of the Ultrabook device platform could be tapped using a few simple functions. Best of all, the AGK includes BASIC and native-language support. "Those looking for more power and functionality than the standalone AGK BASIC language can provide are free to create AGK applications using Visual Studio*, Xcode*, Pascal, Eclipse*, and other environments and languages," wrote Pilz on his CodeProject page. He also noted that core commands of the AGK can easily be translated between Tier-1 BASIC and the Tier-2 native languages.

    AGK saved Pilz an enormous amount of time, and he credits Bamber for the success of his project. Because the toolkit’s developers abstracted out all of the Ultrabook device sensors and made them into a high-level API, Pilz was able to create rapid prototypes. He was also able to just call the different commands and read the values from the numerous sensors, including the ambient light level, accelerometer, and touch events, without having to spend days or weeks manually coding in the C++ environment or whatever the native SDK required at that time.

    However, the first few weeks were anything but smooth. “It was nerve-racking because AGK is a closed source library and it wasn’t fully prepared for Ultrabook support when the competition began,” said Pilz. “The AGK developers consistently released new beta builds throughout the competition—sometimes several iterations in a single week—each with increasingly comprehensive support for Ultrabook. Eventually AGK had every feature of the Ultrabook available via simple function calls.” Unbelievably, the final beta came out the day before the competition ended. So Pilz waited until that point to make sure that everything was supported and working on the Ultrabook device before he submitted the app to the Intel AppUp® center. Pilz credits the guys at The Game Creators for working around the clock to make sure this application development kit was compatible with the Ultrabook device. Good thing, too, because additional challenges arose that caused the project to lose time.


    Figure 3. As game play advances and objects are captured, the ball gets bigger and heavier, making it increasingly difficult for the player to maneuver and putting ever more stress on the elastic band that holds the balls together.

    Development Contest Rules

    In addition to several snags with the Comodo certificate issuing system, Pilz hit a wall when submitting his app to the Intel AppUp center, Intel’s Ultrabook device app store. He warns other developers to be sure to dot all i’s and cross all t’s in their application submissions, including the metadata: his application was rejected at the 11th hour because he did not include the registered trademark symbol next to the “Ultrabook” product name in his game description. Pilz was not the only applicant to run into that hurdle, but thankfully Intel was quick to approve the app once this minor detail was corrected.

    Testing for Ultrabook Devices

    With the technicalities behind him, Pilz turned his focus on testing and on acquiring the proper tools in the development kit to interface with the Ultrabook device. Having an Ultrabook device himself, he was able to test the sensors that he wanted his app to use.

    The experience overall was a positive one, and Pilz said he'd be glad to do it all again and continue developing apps for the Ultrabook device. "Absolutely, yes. It was a fun and unique experience for me, even having developed for the iPad and Android* and other devices. I just think having that raw, high-end computer power and a good processor and video card [lets people] develop apps that you can't really create on most mobile devices. Even though they’re getting more powerful, they still don't compare to a dedicated Ultrabook."

    It's Not about the Money

    While Ballastic is a free app, Pilz does generate revenue from other applications he’s built with the company he founded, LinkedPIXEL. In the future, Pilz might switch to a free/premium model, offering a free limited version with an upgrade to remove ads or purchase new levels. “My game lends itself well to limitless new levels and power-up expansions, but [charging for it] hasn't crossed my mind too much."

    For developers building apps for today's touch screen mobile devices, Ultrabook devices, and particularly running Windows 8, Pilz believes it’s important to think outside of the box when developing these new-style apps. "If you’re a traditional desktop programmer, you might build an interface that works well with keyboards and mice, but we also need to consider how people might benefit from a slightly different design to make things more efficient for those who may be using alternate input methods.” Pilz also points users to resources such as CodeProject and the Intel® Developer Zone, and, of course, libraries like AGK, and feels that developers should still become familiar with Intel’s SDKs because it's good to know exactly what features and commands the Ultrabook device supports.

    Pilz also recommends developing code with a mind toward reuse, ensuring that interfaces and input controls easily adapt in the future, because of the new types of input technologies that will be available. Developing an interface that actually works well with current technology and is also adaptable will open up the doors to a much wider range of users.

    Helpful Resources


    Pilz relied heavily on the App Game Kit platforms and features for source code libraries developed specifically for the Ultrabook device, its sensors and interfaces. He was originally led to AGK and the App Innovation Contest through contacts made on CodeProject, a collaboration web site for developers. Pilz also made regular use of the Intel® Developer Zone for reference materials related to Intel SDKs.

    About Matthew Pilz


    Matthew Pilz spent his early years in rural Wisconsin, enthusiastic about computers and game development. It all started when his brother got a Commodore 64, and Pilz has been working with computers ever since. With an associate’s degree in E-commerce and web administration from Milwaukee Technical College, a Bachelor of Science in Web Technologies from Bellevue University, and various computer-related technical certificates, Pilz has spent the last decade focused on web design and application development for a variety of platforms. In spring of 2013, Pilz also won a grand prize in the Intel® Perceptual Computing Challenge by creating a prototype application, Magic Doodle Pad, using a perceptual camera and the Intel® Perceptual Computing SDK.


    An Introduction to the 4th Generation Intel® Core™ Processor


    Downloads


    Download the Introducing the 4th Generation Intel® Core™ Processor (code-named Haswell) PDF [614KB]

    Abstract


    Intel is launching the 4th generation Intel® Core™ processor, code-named Haswell. Its capabilities build on the 3rd generation Intel® Core™ processor graphics. This introductory article provides a glimpse into the 4th gen processor, with an overview of highlights like the Intel® Iris™ graphics, performance enhancements, low power options, face recognition capabilities, and more. Microsoft Windows* 8 developers will also learn about capabilities available to both Desktop and the Modern UI environments and how to take advantage of the 4th generation processor capabilities.

    Key 4th generation processor features


    The new processor builds on the processor graphics architecture first introduced in 2nd gen Intel® Core™ processors. While 2nd generation processors were built with the 32 nm manufacturing process, both 3rd and 4th generation processors are based on 22 nm technology. The following paragraphs describe the key differences between the 3rd and 4th gen processors.

    First ever System on Chip (SoC) for a PC

    The 4th gen Intel® Core™ processor is the first ever SoC for a PC. System on Chip, or SoC, integrates all the major building blocks for a system onto a single chip. With CPU, Graphics, Memory, and connectivity in one package, this innovative modular design provides the flexibility to package a compelling processor graphics solution for multiple form factors.

    Enhanced battery life

    The 4th gen processor provides up to 9.1 hours of HD video viewing compared to 6 hours on a 3rd gen processor. The latest processor also provides 10-13 days of standby power (with refreshed email and social media notifications) compared to 4.5 days of standby power on 3rd generation processors.

    Table 1: Battery life comparison between 3rd Generation and 4th generation Intel® Core™ Processors.

    [Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Configurations(i) For more information go to http://www.intel.com/performance.]

    Note: TDP, Thermal Design Power, represents worst-case system power.

    Intel® Iris™ Graphics

    Intel Iris Graphics allows you to play the most graphics-intensive games without the need for an additional graphics card. The graphics performance of the 4th gen processor nearly doubles that of the previous generation of Intel® HD Graphics.

    Figure 1: Comparison of graphics performance of 4th gen Intel® Core™ with previous generations

    [ Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Configurations(ii). For more information go to http://www.intel.com/performance.]

    Additionally, the Intel Iris Pro Graphics with integrated eDRAM provides close to double the performance of the 3rd generation HD graphics GPUs on Ultrabook™ devices. Both GPUs operate at higher than 28W Thermal Design Power (TDP) compared to the 17W TDP of 3rd generation graphics, making them better suited for high-performance operation on Desktop/AIO/Laptop form factors. [Source: http://arstechnica.com/gadgets/2013/05/intels-iris-wants-to-change-how-you-feel-about-integrated-graphics/]

    Intel Iris Graphics also supports Direct3D* 11.1, OpenGL* 4.1, and OpenCL* 1.2, along with the Intel® Quick Sync Video encoding engine, Intel® AVX 2.0, and DirectX extensions.

    4th Generation Intel® Core™ processor variants

    Multiple packages of the 4th generation processor are available to cater to a growing range of systems: Workstations, Desktops, Ultrabook systems, All-In-Ones, Laptops, and Tablets. While the higher-end variants targeted at Workstations and Desktops provide higher performance, they also consume more power than the power-optimized mobile variants. Table 2 below compares the variants for different form factors and usages.

    Table 2: 4th Generation Intel® Core™ Processor Variants

    The U and Y series are designed for Ultrabook devices, convertibles, and detachable form factors.

    The 4th gen processor line provides the flexibility to match power requirements with graphics performance. While the high end provides substantially better graphics performance, the lower end is suitable when lower graphics performance is required.

    For a detailed analysis of graphics capabilities, please refer to the Graphics Developers Guide.

    Intel® AVX 2.0

    Intel® Advanced Vector Extensions (Intel® AVX) 2.0 is a 256-bit instruction set extension to Intel® Streaming SIMD Extensions (Intel® SSE). Intel AVX 2.0 builds on version 1.0 and adds features such as a fully pipelined fused multiply-add (FMA) on two ports, providing twice the floating-point performance for multiply-add workloads; 256-bit integer SIMD operations, up from 128-bit; gather operations; and bit manipulation instructions. These capabilities enhance usages such as face detection, pro-imaging, high performance computing, consumer video and imaging, increased vectorization, and other advanced video processing capabilities.

    More resources on Intel AVX 2.0:

    Intel Iris Graphics Extensions to DirectX API

    An added feature with 4th generation processor graphics is the API set for DirectX extensions. Two APIs are available that provide for pixel synchronization and instant access. Pixel synchronization lets you effectively read/modify/write per-pixel data, which makes the tasks of programmable blending and order independent transparency (OIT) more efficient. Instant access lets both CPU and GPU access the same memory for mapping and rendering. These APIs work on DirectX 11 and above.

    For more detailed information, please refer to the Graphics Developers Guide.

    Security

    Ultrabook systems with 4th gen processors come with enhanced security features like Intel® Platform Trust Technology, Intel® Insider, and Intel® Anti-Theft technology(iii). The processors also feature Intel® Identity Protection Technology(iv), which provides identity protection and fraud deterrence.

    Developer Recommendations


    Developers looking to take advantage of the new features explained above can use the following guidelines for programming on 4th gen processors with Windows 8.

    1. Optimize apps for touch: Ultrabook systems with 4th gen processors all include touch screens. Developers should visit these UX/UI guidelines to optimize their app design and enable touch.

      More resources:

    2. Optimize apps with sensors: 4th generation processor-based platforms come with several sensors: GPS, Compass, Gyroscope, Accelerometer, and Ambient Light. These sensor recommendations are aligned with the Microsoft standard for Windows 8. Use the Windows sensor APIs, and your code will run on all Ultrabook and tablet systems running Windows 8.

      More resources:

    3. Optimize apps with Intel platform features: While Windows 8 allows for both Desktop and Windows Store apps, there may be a difference in how platform capabilities are exposed for each type. For Desktop applications, key features are Intel® Wireless Display (WiDi)(v) and security features such as Intel Anti-Theft Technology and Intel Identity Protection Technology, while HD Graphics is available for both types of apps. Please refer to the resources below for more information on each.

      More Resources:

      For Windows Store apps, key enablers are Connected Standby, HD Graphics, stylus input to support tablet usages, and camera. Please refer to the resources below for more information on each:

      More Resources:

    4. Optimize for visible performance differentiation: Desktop apps can be optimized to take advantage of Intel AVX 2.0, Intel Quick Sync Video encode, and post-processing for media and visually intensive applications. Note that the Intel Media SDK and Intel Quick Sync Video are available for Windows Store apps to take advantage of as well.

      More Resources:

    5. Optimize apps with capabilities from the Intel® Perceptual Computing SDK: The Intel AVX 2.0 capabilities built into the 4th gen processor provide for face recognition, voice recognition, and other interactive features that enable very compelling usages for Desktop apps.

      More resources:

    6. Optimize app performance with Intel® tools: Check out the Intel® Composer XE 2013 and Intel® VTune™ Amplifier XE 2013 for Windows Desktop. These suites provide compilers, Intel® Performance Primitives and Intel® Threaded Building Blocks that help boost application performance. You can also optimize and future-proof media and graphics workloads on all IA platforms with the Intel® Graphics Performance Analyzers 2013 and Intel Media SDK that are available for both Desktop and Windows Store apps.

      More resources:

    About the Author


    Meghana Rao is a Technical Marketing Engineer with the Developer Relations Division. She helps evangelize Ultrabook™ and Tablet platforms and is the author of several articles on the Intel® Developer Zone.

    (i) 3rd Gen Intel® Core™i7-3667U processor, Intel HD Graphics 4000, Tacoma Falls 2 reference design platform, 2x4GB DDR3L-1600, 120GB SSD, 13.3” enhanced display port panel with 1920x1080 resolution, 50 WHr battery, Windows* 8.

    4th Gen Intel® Core™i7-4650U processor, Intel HD Graphics 5000, pre-production platform, 2x2GB DDR3L-1600, 120GB SSD, 13.3” enhanced display port panel supporting panel self refresh with 1920x1080 resolution, 50 WHr battery, Windows* 8.

    (ii) 3rd Gen Intel® Core™i7-3687U processor, Tacoma Falls 2 reference design platform, Intel HD Graphics 4000, Intel HD Graphics driver 15.31.3063, 2x2GB DDR3L @ 1600MHz, 120GB SSD, 13.3” enhanced display port panel with 1920x1080 resolution, 50 WHr battery, Windows* 8.

    4th Gen Intel® Core™i7-4770R processor, pre-production platform, Intel® Iris™ Pro Graphics 5200, Intel HD Graphics driver 15.31.3071, 2x2GB DDR3L @ 1600MHz, 160GB SSD, Windows* 8.

    (iii) No system can provide absolute security under all conditions. Requires an enabled chipset, BIOS, firmware, and software with data encryption, and service activation with a capable service provider. Consult your system manufacturer and service provider for availability and functionality. Service may not be available in all countries. Intel assumes no liability for lost or stolen data and/or systems or any other damages resulting thereof. For more information, visit www.intel.com/content/www/us/en/architecture-and-technology/anti-theft/anti-theft-general-technology.html.

    (iv) No system can provide absolute security under all conditions. Requires an Intel® Identity Protection Technology-enabled system, including a 2nd gen or higher Intel® Core™ processor enabled chipset, firmware and software, and participating website. Consult your system manufacturer. Intel assumes no liability for lost or stolen data and/or systems or any resulting damages. For more information, visit http://ipt.intel.com.

    (v) Requires an Intel® Wireless Display enabled PC, compatible adapter, and TV. 1080p and Blu-Ray* or other protected content playback only available on 2nd generation Intel® Core™ processor-based PCs with built-in visuals enabled. Consult your PC manufacturer. For more information, see www.intel.com/go/widi.


    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    OpenCL and the OpenCL logo are trademarks of Apple Inc and are used by permission by Khronos.

    Codemasters GRID 2* on 4th Generation Intel® Core™ Processors - Game development case study


    Downloads

    Codemasters GRID 2* on 4th Generation Intel® Core™ Processors - Game development case study PDF [1.34 MB]

    Abstract


    Codemasters is an award-winning game developer and publisher, with popular game brands like DiRT*, GRID*, Cricket*, and Operation Flashpoint*. With GRID 2, Codemasters wanted to deliver a compelling high-end experience on 4th generation Intel® Core™ processors even on low power Ultrabook™ systems. On top of that, GRID 2 includes power-friendly features to improve and extend the gaming experience when playing on the go with an Ultrabook device.

    Codemasters collaborated with Intel to make the most of the wide range of performance options available in systems running 4th generation Intel Core processors. As a result, Codemasters shipped GRID 2 with fantastic visual quality, increased performance, and significant improvements in power management and mobile features. The game looks and runs its best on Ultrabook devices with 4th gen Intel Core processors. GRID 2 uses two advanced features that are only made possible using the new Intel® Iris™ Graphics extension for pixel synchronization. With pixel synchronization, GRID 2 uses adaptive order independent transparency (AOIT) on the game’s foliage and adaptive volumetric shadow mapping (AVSM) for efficient self-shadowing particles. With both features together, the GRID 2 game artists had greater control than ever to create an immersive world in the game. With GRID 2 running on PCs with 4th gen Intel Core processors, gamers have a high-performance experience that looks fantastic and plays great.

    4th generation Intel Core processors bring big gains for GRID 2


    With the introduction of 4th gen Intel Core processors, Intel delivered several technology advances that Codemasters had been looking for. With the processor’s advancements in graphics technology, improved CPU performance, and the Intel Iris Graphics extensions to DirectX* API, Codemasters had the basis for outstanding features and performance in GRID 2, as well as a strong collaboration with Intel.

    With the graphics extensions, game developers have two new DirectX 11 extensions at their disposal, supported on 4th gen Intel Core processors.

    • The first is the Intel Iris Graphics extension for instant access, which lets the graphics driver deliver a pointer to a location in GPU memory that can also be accessed directly by the CPU. Previously, accessing the GPU’s memory would have resulted in a copy, even though the CPU and GPU share the same physical memory.
    • The second extension is the Intel Iris Graphics extension for pixel synchronization, which enables programmable blend operations. It provides a way to serialize and synchronize access to a pixel from multiple pixel shaders, and guarantee that pixel changes happen in a deterministic way. The serialization is limited to directly overlapping pixels, so performance remains unchanged for the rest of the code.

    Since the extensions were new for 4th gen Intel Core processors, they hadn’t been used in a shipping game before, so we set out to learn the very best ways to use these extensions for GRID 2.

    Codemasters was also interested in the power improvements that the 4th gen Intel Core brings to PC gaming. With longer battery life and better stand-by times, the Ultrabook platform makes an even more compelling gaming environment. Historically, Codemasters did not optimize for power efficiency. With GRID 2, they consistently deliver equivalent or better visuals, while using less power. GRID 2 players win, with longer play times on battery.

    Getting it in tune: Foliage and particles needed help

    Codemasters wanted GRID 2’s graphics to shine on Intel systems, but we had some challenges. To make the game as realistic as possible, we used a particle system for smoke and dust effects from the tires. The tire smoke originally cast a simple shadow on the track, but the smoke effect didn’t shadow itself and had no proper lighting. It relied on artist-created fake lighting, baked into the textures. For years, the artists at Codemasters have been asking for more realistic lighting for their particle systems, but the performance implications had always made it prohibitive. We knew there were better options, and the new processor has given them to us.


    Figure 1.
    Smoke particles before optimization


    In addition to the game’s signature city racing circuits, GRID 2 has several tracks that pass through countryside, featuring dense foliage along the track. This foliage needs to combine with complex lighting to make the racing environment’s atmosphere feel realistic and immersive. Foliage needs to use transparency along its edges to appear realistic and avoid pixel shimmer, especially on moving geometry. In order to render transparent geometry correctly, you must render it in a specific order, which can be impractical in complex real-time scenes like those in GRID 2. An alternative is to use Alpha to Coverage, but that requires multisample anti-aliasing (MSAA), which comes with a performance cost and still has artifacts compared to correct alpha blending.

    Figure 2.Foliage before optimization, showing detail on the right


    Existing solutions to these challenges require a discrete graphics card, and often run brutally slow since they are very computationally heavy. Codemasters needed solutions that were as efficient as possible, and Intel delivered.

    Pixel synchronization: Bringing new performance to existing algorithms
    Both self-shadowing of particles and correct foliage rendering have one thing in common: they are problems that require data to be sorted during rendering. Shadows must be sorted with respect to the light source, and the foliage must be sorted relative to the viewer. One solution is to use DirectX 11 and unordered access views (UAVs). Because of limitations in the way atomic operations can be used, however, the algorithms either require unbounded memory or can produce visual artifacts when memory limits are reached. UAVs also can’t guarantee that each frame will access pixels in the same order, so some sorting is required to prevent visual artifacts between frames.

    The Intel Iris Graphics extension for pixel synchronization gives graphics programmers new flexibility and control over the way that the 3D rendering pipeline executes pixel shaders. Intel researchers used this capability to design algorithms that solve three long-standing problems in real-time graphics:

    • Order-independent transparency
    • Anti-aliasing of complex scene elements such as hair, leaves, and fences
    • Shadows from transparent effects such as smoke

    Unlike previous approaches, Intel’s algorithms with pixel synchronization use a constant amount of memory, perform well, and are robust enough for game artists to intuitively use them in a wide range of game scenes. Because pixel synchronization also guarantees any changes to the UAV contents are always ordered by primitive, they’re consistent between frames. This means that games can now use order-dependent algorithms. Intel published earlier versions of these algorithms in the graphics literature two to three years ago, but they have not been practical to deploy in-game until the advent of pixel synchronization on 4th gen Intel Core processors. The published algorithms are called adaptive order-independent transparency (AOIT) and adaptive volumetric shadow maps (AVSM).

    Smoke particle shadow and lighting: Using pixel synchronization for AVSM

    The smoke particle effects are central in GRID 2, so this was an obvious place to apply AVSM. With this feature added, the smoke particles realistically cast shadows on themselves and the track. Artists have greater control over how the particles are lit and shadow themselves, so they have great visual impact.

    "The artists working on 'GRID 2' have been requesting this type of effect for years, and prior to this, it wasn't possible to achieve it at a reasonable cost," said Clive Moody, senior executive producer at Codemasters Racing*. "The fact that this capability will be available to millions of consumers on forthcoming 4th generation Intel Core processors is very exciting to us."

    A PC-only particle system showcases this result.


    Figure 3.
    Smoke particles with AVSM, showing self-shadowing


    Because AVSM combines transparent results in a space-efficient way, there is some compression. You might think that AVSM could introduce unacceptable compression errors, but in practice, visual quality is very good. More importantly, the effect is deterministic since the pixel synchronization ensures pixels are committed in the same order on each frame. This avoids problems with shimmering and flickering that can be introduced by related techniques.

    The first implementation of AVSM in GRID 2 used 8 nodes, and performed all lighting calculations on a per-pixel level using the resolution of the current particle system (normally smaller than the actual screen size). Bilinear sampling smoothed out artifacts when viewing a stationary smoke plume in a replay camera. This first implementation was fast enough in game on higher end systems with Iris Pro Graphics, but with cars having multiple emitters (4+ per car) it took 8 ms to create a shadow map and up to 18 ms to resolve for each. This gave a worst-case of about 100 ms per frame for adding AVSM, so improvements were needed if this feature was to be enabled by default.

    The AVSM node itself was improved, so that 4 nodes could be used instead of 8 with no noticeable visual change. On top of that, a major improvement in performance and quality came from adding vertex shader tessellation, with per-vertex lighting. This avoids sampling the AVSM data structures at a more expensive per-pixel level. GRID 2 implements screen space tessellation in the domain shader and then uses faster per-vertex lighting evaluation to sample the shadow map. By using screen space tessellation, we ensure that large particle quads near the front of the screen are broken down into smaller triangles, while small or distant particles are left relatively untouched. The results are nearly identical visually, and performance is improved, especially for the worst-case scenarios such as replaying while focusing on the car doing a wheel spin.

    Once particle self-shadowing was added, it became clear that the individual particles weren’t sorted correctly when drawn on the screen. Originally, the game had sorted particles back-to-front within an emitter, so the transparent particles would render correctly. With multiple emitters per car, however, it was possible for far smoke plumes to be drawn on top of near ones.

    Figure 4. Problem - unsorted smoke particles with AVSM, with far smoke plumes on top of near ones

    This wasn’t a problem before because the original art was uniform. At first, we planned to solve this with pixel synchronization. We created a working version of the AOIT algorithm (described below) to do this, but since the particles are all screen-space-aligned, they can simply be sorted on the CPU instead. This was faster than a pixel synchronization solution, since it used spare performance on the CPU.

    The final piece of the lighting puzzle was to integrate the AVSM shadow system with Beast* lighting from Autodesk. Beast lighting is used to light the rest of the geometry, which means the AVSM shadow map must pick up the recalculated lighting data, so that smoke trails will darken under bridges or pick up light sources around the edge of the track.

    While AVSM still has a run-time cost, after optimizations it was well within the budget for visual impact. The worst-case scenario was sped up almost 4x. Typical performance is about 0.7 ms per shadow cascade with a 0.4 ms resolve stage, using about 200K pixels on a quarter screen render target. AVSM is enabled by default on high presets; the algorithm can also be switched off and on with the Advanced Settings menu on any 4th gen Intel Core processor-based system.

    Foliage transparency: Using pixel synchronization for AOIT

    Codemasters’ racing titles have a long history of attractive outdoor scenery, with the DiRT franchise pushing artistic boundaries to create realistic off-road environments. While GRID 2 doesn’t go off-road, there are still plenty of tracks that show off stunning point-to-point circuits.

    Figure 5. The Great Outdoors, showing off the stunning scenery


    Codemasters wanted their artists’ work to shine. Transparency on the foliage edges is one part of creating a realistic look and feel. Originally, the only way to get soft edges was to use Alpha to Coverage with high levels of MSAA enabled. This ran very slow, and Alpha to Coverage doesn’t provide depth to densely packed trees. Codemasters turned to AOIT to get the transparent edges of the foliage looking their best, while also running faster and improving the look of the dense forest sections. No changes were required to the art pipeline.

    Figure 6. Foliage with AOIT, showing soft edges in the detail on the right


    It took about 5 ms to render the trees in an area of the track with heavy foliage, which was a significant chunk of a frame. When it was first implemented, AOIT pushed that to 11 ms. This approached the time to run MSAA, so this was too long. Optimizations reduced this significantly.

    The initial AOIT implementation used 4 nodes to store the transparency information. It also used a complex compression routine (similar to the one used for AVSM) that took into account the difference in area beneath a visibility graph. Experiments showed that for typical scenes sorted relative to the viewer, a much simpler algorithm could be used since the depth played a smaller part in the visibility decision. Further experiments showed that 2 nodes were enough to store that data. This allowed both color and depth information to be packed into a single 128-bit structure, rather than separate color and depth surfaces. AOIT’s performance was further improved by using a tiled access pattern to swizzle the elements of the UAV data structure, making memory access more cache-friendly. In total, this nearly doubled the performance of AOIT, bringing it down to 2-3 ms on complex foliage heavy scenes and much less on scenes with light foliage.

    While AOIT proved a good solution for the complex foliage, it still presented some issues. Ideally, all transparent objects would get rendered with the same AOIT path. This would have been expensive since some transparent objects like god rays were already alpha-blended to a large part of the screen and rendered with a traditional back-to-front pass. Combining the two techniques initially created draw-order problems, since it’s difficult to combine traditional back-to-front transparency rendering with AOIT.

    We wanted to keep the efficiency of the back-to-front render for objects that could easily be sorted, while gaining the flexibility of using AOIT on complex intersecting geometry. The solution turned out to be fairly elegant. First, render AOIT without resolving to the screen. Then, execute a back-to-front traditional pass of transparent objects. Anywhere a traditionally rendered object interacted with a screen-space pixel from the AOIT pass, that object was added to the AOIT buffer instead of being rendered. Finally, they’re all resolved. This approach works great, as long as the AOIT objects don’t cover a large part of the screen at the same time as a standard object. This approach allowed ground coverage and god-rays to correctly interact with the tree foliage with only a minimal performance impact. In the end, the AOIT became so efficient it was added to other objects that suffered from aliasing, such as the chain link fences. This allowed for thin geometry to fade out into the distance gracefully, rather than becoming noisy and aliased.

    Figure 7. Fences on the left show aliasing in the distance, AOIT improves fences on the right


    At first, AOIT didn’t work right when MSAA was also enabled. AOIT needs to account for pixels rendered at higher sample frequency at triangle edges. It’s not enough to simply add partially covered pixels into the AOIT buffer with a lower alpha value since they won’t blend properly. These pixels have to be handled separately, adding to the time to compute them. Otherwise, they can reinforce each other and give a double darkening around edges. The solution for GRID 2 was to do this partially, to get the right balance between correctness and compute time.

    AOIT is enabled at Medium quality settings and above, and it can be switched off and on with the Advanced Settings menu. GRID 2 uses Medium quality settings by default on all 4th gen Intel Core processors.

    Instant access: Lessons learned
    The 4th generation Intel Core processors brought two new extensions to DX11 graphics. Pixel synchronization was heavily used in GRID 2. What about instant access?

    Instant access provides access to resources in memory shared by the CPU and GPU. Since GRID 2 already used direct memory access on the consoles, at first we assumed it would be easy to use on the PC as well. Systems such as particles, ground cover, crowd instance data, and crowd camera flashes all accessed the vertex data. Instead of giving an immediate speedup, instant access actually introduced stalling in the render pipeline: DirectX was still honoring the buffer usage and would wait to unlock the resource if it was already in flight to the graphics engine.

    We could have added manual double-buffering to work around this, but we realized that the driver was already doing a good job optimizing its usage on the linearly-addressed memory, so we weren’t likely to see a large speedup. As a result, instant access wasn’t used in GRID 2.

    We talked about a few ideas that could have given performance boosts, like using instant access for texture memory. GRID 2 doesn’t stream the track data, and only a small number of videos are uploaded during a race, so we didn’t expect a large gain. After that, we focused our attention on pixel synchronization since we had such obvious benefits from that extension in this game.

    Your game may take advantage of instant access in several ways. Instant access might give faster texture updates from the CPU (working on native tiled formats), since your game will avoid the multiple writes that come from reordering data for the driver. Or you may find major gains accessing your geometry if you have a lot of static vertex geometry with small subresource updates per frame.

    Try it out, and see!

    Anti-aliasing: Big improvements
    Anti-aliasing helps games look great. Multi-sample anti-aliasing (MSAA) is commonly used and supported by Intel graphics hardware, but it can be expensive to compute. Since GRID 2 has a very high standard for visual quality and run-time performance, we weren’t satisfied with performance trade-offs for enabling MSAA, especially on Ultrabook systems with limited power budgets. Together, Intel and Codemasters incorporated a technique we’ll call conservative morphological AA (CMAA).

    While you should look for full details on CMAA in an upcoming article and sample, we’ll outline the basics. As a post-process AA technique, it’s similar to morphological AA (MLAA) or subpixel morphological AA (SMAA). It runs on the GPU and has been tailored for low bandwidth, with about 55-75% of the run-time cost of 1xSMAA. CMAA approaches the quality of 2xMSAA for a fraction of the cost. It does have some limited temporal artifacts, but looks slightly better on still images.

    For comparison, at 1600x900 resolution with High quality settings, enabling 2xMSAA adds 5.0 ms to the frame, but CMAA adds only 1.5 ms to the frame (at a frame rate of 38.5 FPS). CMAA is a great alternative for gamers who want a nicely anti-aliased look but don’t like the performance of MSAA.

    Figure 8. Original garage on the left shows some aliasing, better with CMAA applied on the right.


    Because CMAA is a post-processing technique, it also works well in conjunction with AOIT, without suffering from the sampling frequency issues discussed above.

    SSAO: A study in contrasts
    GRID 2 contains screen-space ambient occlusion (SSAO) code that runs great on some hardware, but didn’t run as well as we’d like on Intel® hardware. There are different SSAO techniques, and GRID 2 originally used high definition ambient occlusion (HDAO). When we first studied it, it took 15-20% of the frame, which was far too much.

    The original SSAO algorithm uses compute shaders, but CS algorithms can sometimes be tricky to optimize for all variations of hardware. We worked together to create a pixel shader implementation of SSAO that performs better in more cases.

    Figure 9. SSAO turned off on the left, SSAO turned on and running in a pixel shader on the right.


    The CS implementation relies heavily on texture reads/writes. The PS implementation uses more computation than texture reads/writes, so it doesn’t use as much memory bandwidth as the CS implementation. As a result, the PS version of SSAO runs faster on all hardware we tested and runs significantly faster on Intel graphics hardware. While the new version is the default, you may choose either SSAO implementation from the configuration options.

    Looks great, less battery: Minding the power gap
    More gamers than ever play on the go. This poses some special challenges for game developers. To help players keep an eye on their charge while playing, GRID 2 displays a battery meter on-screen. Codemasters used the Intel® Laptop and Netbook Gaming Technology Development Kit to check the platform’s current power level and estimated remaining battery time. When you’re running on battery power, that information is discreetly shown as a battery meter in the corner of the screen.

    When playing on battery, the CPU and GPU workloads each contribute to the overall power use. This makes it a careful balancing act to optimize for power since changes to one area may affect the power use of the other.

    First, we optimized any areas where extra work was being done on the CPU that didn’t affect the GPU.  For example, there were some routines that converted back and forth between 16-bit floats and 32-bit floats. Those routines used simple reference code, but after study, we replaced them with a different version that ran much faster.

    Another CPU power optimization came from the original use of spin locks for thread synchronization. This is very power inefficient; it keeps one CPU core running at full frequency, so the CPU’s power management features cannot reduce the CPU frequency to save power. It can also prevent the operating system’s thread scheduler from making the best thread assignment. Several parallel job systems were rewritten, including the CPU-side particle code. They were changed to reduce the amount of cross-thread synchronization.

    One of the best power optimizations that can be done on a mobile platform is to lock the frame rate to a fixed interval. This lets both the CPU and GPU enter a lower power state between frames. Since GRID 2 was already optimized around a target of 30 FPS on default settings, it wouldn’t have had much effect if we had simply set a 30 FPS frame rate cap. Instead, there’s a special mode added to the front-end options. If power saving is enabled, the game will reduce some visual quality settings when the user is running on battery. Since none of the setting changes require a mode change, they can happen seamlessly during play. These changes raise the average frame rate above 30 FPS, so a 30 FPS frame rate cap is now effective at saving power and prolonging game play on battery.

    Finally, the game’s built-in benchmark now uses power information. When profiling the game over a single run, GRID 2 logs power and battery information as the benchmark loops. If you study these results over time, you can see how power-efficient your current settings are on your benchmark system.

    Conclusions


    Working together, Intel and Codemasters found ways to deliver a fantastic game that looks and runs great on Intel’s latest platforms.

    Now that they can be built on top of pixel synchronization, AVSM and AOIT bring new levels of visual impact along with great performance. Together, they enrich the game environment and give a greater level of immersion than ever before.

    The addition of CMAA brings a new option for high-performance visual quality. Moving SSAO to a pixel shader helps the game run faster. After optimizing usage of the DirectX API with more efficient state caching, optimizing float conversion routines, removing spin locks, and automatically adjusting quality settings and capping the frame rate, the game gets the most out of your battery. GRID 2 also helps gamers keep track of their battery power when they’re playing on the go.

    Taken together, these optimizations make GRID 2 look and run great on Intel’s latest platforms. Consider the same changes in your game!

    References


    Latest AVSM paper and sample: http://software.intel.com/en-us/blogs/2013/03/27/adaptive-volumetric-shadow-maps
    Original AVSM paper and sample: http://software.intel.com/en-us/articles/adaptive-volumetric-shadow-maps
    AOIT paper and sample: http://software.intel.com/en-us/articles/adaptive-transparency
    Laptop and Netbook Gaming TDK Release 2.1: http://software.intel.com/en-us/articles/intel-laptop-gaming-technology-development-kit
    4th Generation Intel® Core™ Processor Graphics Developer Guide: http://software.intel.com/en-us/articles/intel-graphics-developers-guides

    About the author


    Paul Lindberg is a Senior Software Engineer in Developer Relations at Intel. He helps game developers all over the world to ship kick-ass games and other apps that shine on Intel platforms.

    Intel, the Intel logo, Core, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    UltraDynamo Case Study - App Innovation Contest Entertainment Category Winner


    By William Van Winkle

    Downloads


    Case Study Ultra Dynamo [PDF 662.43KB]

    From Top Gear to Top Winner


    By day, David Auld is an Offshore Installation Manager (OIM) in the oil and gas industry. But when the production platform is humming along without him, Auld indulges his hobby as a devout “petrol-head” (car enthusiast). He also finds time to feed a passion for programming, which led to him earning a 2012 BSC Honours Degree in Computing. Surprisingly, these three facets of the native Scotsman all converged when Auld won the Entertainment Category of the Intel® App Innovation Contest.


    With UltraDynamo, art may have copied life when developer Dave Auld took inspiration from his own Mercedes console. (Source: http://www.mbusa.com/vcm/MB/DigitalAssets/Vehicles/ClassLanding/2013/C/Coupe/Gallery/2013-C-Class-Coupe-Gallery-009_wr.jpg)

    Auld has been a CodeProject member for nearly 10 years. He also takes pride in owning a Mercedes-Benz C63 AMG sedan, the latest in his long line of personal sports cars and one particularly blessed with a graceful and classic dash console. Perhaps this was in the back of Auld’s mind when he noticed CodeProject advertising the Intel App Innovation Contest. He read through several of the proposals and thought, “There must be something I can come up with...” From there, all it took was watching a Top Gear rerun featuring the Bugatti Veyron and its horsepower indicator. “How did Bugatti do that?” he wondered. In working toward an answer, Auld stumbled further into a series of questions and revelations that led to his award-winning success only weeks later.

    The UltraDynamo App: Form and Function


    UltraDynamo is a Microsoft Windows* Desktop application that uses many of the Ultrabook™ device platform’s sensors to provide motor sports enthusiasts with performance data about their vehicles. As shown in the screen capture below, UltraDynamo offers a range of readouts, including x-, y-, and z-axis accelerometers, a compass rose, a speedometer, inclinometers, and gyrometers. These might be presented as charts, pictures, numeric readouts, and so on. The data for each of these springs from various Ultrabook device sensors, including the accelerometer, gyrometer, inclinometer, and a Global Positioning System (GPS) sensor. In short, UltraDynamo presents a configurable on-screen dashboard.


    Without real sensor data on hand, Auld saved ample development time by simulating input values. This screen capture shows a typical simulation dialog box and its effect on the main dashboard.

    In looking at the application, Auld’s priority was clear: Keep the front end as clean and simple as possible to minimize key entry by the user. (Obviously, requiring manual interaction while the user is behind the wheel would be undesirable.) In the same vein, he understood that different users would come to the app with different needs and priorities. The UI should reflect that. Thus he broke out individual readout functions into separate window elements that users could reposition and resize as desired.

    For Auld, this UI simplicity should also be reflected in the program’s responsiveness. “Usability is key,” he said. “Users want that reward: when they click, the app does what’s expected. That’s what will keep them wanting to use the program.”


    The UltraDynamo app relies on a flexible dashboard interface featuring a range of gauges, including compass heading, acceleration, speed, and horsepower.

    Auld developed UltraDynamo on a pair of PCs running Windows 8 Pro, one desktop and one laptop. Neither had any sensors, but both had copies of Microsoft Visual Studio* 2012 Pro. Once his concept application was accepted, CodeProject contacted Auld to confirm that he felt he could provide a working application for the competition. When both agreed that it was feasible, CodeProject sent Auld a sensor-equipped Ultrabook device with Windows 8 Pro and Visual Studio 2012 Pro. Auld noted that in order to keep all of his development systems “in check,” he used VisualSVN Server* as a source code library. This library is hosted by a cloud provider on a Windows 2008 R2 virtual machine.

    “I used the AnkhSVN plugin for Visual Studio,” he added. “It was a simple case of checking in any code changes on one system following any edits, then updating the source to the latest version on the others. This worked well as a way to manage the source from a multi-system single developer point of view.”

    Challenges Addressed During Development


    One of the first obstacles Auld had to conquer was a lack of resources offering suggestions on how to handle sensor data. For example, after getting a temperature value, what does the programmer do with it? Ultrabook devices and their many sensors are relatively new to the market, so there isn’t a large bed of third-party examples and advice to follow beyond Intel’s own Ultrabook™ and Tablet Windows* 8 Sensor Development Guide and the Windows 8 code samples Intel offers. Auld had to figure out many of the answers on his own.

    His first such problem was the original program interface. It was, as he put it, “just a bunch of random numbers on the screen.” He needed gauge controls to mimic actual dashboard readouts. At first, he tried to design these on his own, but it soon became clear that there wasn’t enough time to build what he wanted from scratch. He searched the Web, cast about the CodeProject site, and finally unearthed a license-free dial control called Aqua Gauge, written by Ambalavanar Thirugnanam. This dropped easily into Auld’s code and became the backbone on which the other UltraDynamo controls were built.

    Auld also found that frequent accelerometer sensor updates were causing an event flood, which in turn stalled the interface graphics. Through trial and error, he tuned the time intervals at which sensor data events were raised. Finally, he got the display working and stable, although he hopes to return to it for further tweaking. Rather than poll data on a fixed interval, Auld wants to see the app work on a more intelligent feedback loop wherein the app doesn’t request more data until the graphics system is ready to handle it.
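One generic way to tame an event flood (a hypothetical sketch; UltraDynamo used the Windows Sensor API’s report-interval settings, and these names are illustrative) is to drop readings that arrive faster than the UI can redraw, keeping only the most recent value per interval:

```cpp
#include <chrono>

// Rate-limits sensor readings: callers ask ShouldDispatch() before
// pushing a value to the UI; readings inside the minimum interval are
// coalesced (dropped in favor of the next one).
class SensorThrottle {
public:
    explicit SensorThrottle(std::chrono::milliseconds minInterval)
        : minInterval_(minInterval) {}

    // Returns true if this reading should be forwarded to the UI.
    bool ShouldDispatch(std::chrono::steady_clock::time_point now) {
        if (now - lastDispatch_ < minInterval_)
            return false;          // too soon: skip, wait for a later reading
        lastDispatch_ = now;
        return true;
    }

private:
    std::chrono::milliseconds minInterval_;
    std::chrono::steady_clock::time_point lastDispatch_{};  // epoch start
};
```

Passing the timestamp in (rather than calling the clock internally) also makes the throttle trivially testable, in the same spirit as Auld’s simulation-first approach.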


    UltraDynamo’s Configuration tab offers a range of input frequency settings for the Ultrabook™ device’s various sensors.

    As mentioned earlier, Auld’s day job experience played into his UltraDynamo development. After having his proposal accepted for the Intel contest, Auld had to wait to receive his Ultrabook device, and during this time his job required him to go offshore for many days. Fortunately, his background as a control systems manager found him frequently building simulations so that the graphics could be tested without requiring the production plant’s systems to be available. The same methods applied here. He wrote the graphics first, created a dummy set of data, and worried about the sensors later.

    “It was simple,” said Auld. “Put a bunch of sliders onto a form and group them into the relative component, whether it was accelerometers, gyrometers, or whatever. That allowed me to manipulate the graphic as part of my testing without actually having hardware sensors available to me. That was a significant benefit. Otherwise, I would have had to spend several days writing code, then get the Ultrabook from Intel and find that nothing worked. I would have lost a huge amount of time. Let's program for the graphics and write it in such a way that I can just plug in the sensors at a later date and, in theory, it should all work nicely.”
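The approach Auld describes amounts to putting an interface between the gauges and their data source, so a slider-driven simulator and real hardware are interchangeable. A minimal sketch (illustrative names; UltraDynamo itself was a .NET application):

```cpp
#include <cmath>

// The dashboard reads from this interface and never knows whether the
// values come from hardware sensors or from test sliders.
struct IAccelerometerSource {
    virtual ~IAccelerometerSource() = default;
    virtual void Read(double& x, double& y, double& z) const = 0;
};

// Simulator backed by UI sliders (represented here as stored values).
class SimulatedAccelerometer : public IAccelerometerSource {
public:
    void SetSliders(double x, double y, double z) { x_ = x; y_ = y; z_ = z; }
    void Read(double& x, double& y, double& z) const override {
        x = x_; y = y_; z = z_;
    }
private:
    double x_ = 0, y_ = 0, z_ = 0;
};

// Example gauge calculation: total g-force magnitude. Because it takes
// the interface, plugging in a hardware-backed source later requires no
// changes to the gauge code.
double GForceMagnitude(const IAccelerometerSource& src) {
    double x, y, z;
    src.Read(x, y, z);
    return std::sqrt(x * x + y * y + z * z);
}
```

This is exactly the “plug in the sensors at a later date” property Auld relied on: only a new class implementing the interface is needed once the real device arrives.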

    Auld had to take some educated guesses on the data boundaries the Ultrabook device sensors would generate, but once he finally received the device and got started, it all worked fine. Fortunately, his long career and ample experience with touch and sensor development helped him to steer clear of any major issues in these areas.

    UltraDynamo’s last major hitch revolved around the MSI installer required for submission to the Intel AppUp® center. Originally, Auld intended to generate the package from the InstallShield* Lite tool that comes bundled with Microsoft Visual Studio 2012. However, no amount of banging his head against the application helped him understand how to generate an MSI package directly. No matter what he tried, all he could get from the program was an .EXE installer, which the Intel AppUp® center wouldn’t accept. Finally, Auld did find a way to “double-install” into an MSI package, but the Intel AppUp center wouldn’t accept that either. Apparently, examination by Intel techs in a test environment revealed that “the shortcuts that the application installed weren’t announced shortcuts.”

    “To this day, I haven’t got a Scooby what that means,” admitted Auld.

    Fortunately, Intel came to Auld’s rescue. Tech support staff sent him an alpha version of a tool they used internally for app store packaging that relied on WIX* as its underlying toolset for generating installer packages.

    “After working out how the Intel-provided app ran a couple of the WIX underlying commands to generate the MSI package, I took the XML file that the tool had created and used it as a foundation. I tweaked the internal XML nodes, got my shortcuts displayed on the screen, and then manually ran the WIX underlying commands to generate the MSI package. This then went through verification at Intel without any issue.”

    All told, Auld spent about four weeks developing UltraDynamo while working a full-time job. The work was interrupted by all-consuming duties on his production platform, waits for verification from Intel on different code fixes, and so forth. It was a tense, utterly time-constrained process, but it forced him to focus on what was essential for meeting milestone deadlines and to find solutions within his limitations. The lessons here for a part-time, lone programmer were significant.


    Simple but effective, this graph shows UltraDynamo plotting real-time data from the X-, Y-, and Z-axis accelerometer inputs.

    Lessons Learned, Advice Given


    UltraDynamo went on to win the Intel App Innovation Contest’s Entertainment category, but that doesn’t mean the application is finished. Auld said he had to leave many ideas on the drawing board because of time constraints, and the UI that did emerge was largely tailored to his own interests. He would like to see the app develop “workspaces” in which users could customize their dashboards and save them like profiles. He would also like to find more professional-looking gauges before commercializing the software.

    UltraDynamo’s development was much like any other app development, fraught with its own complications, delays, and breakthroughs. “Maybe it's frustrating,” said Auld, “but it does help you to think for yourself, and to try things and dig deeper. In the process, you become proficient.”

    He encourages other developers to be willing to learn, experiment, and fail. As an apprentice, when learning the systems on a new platform, Auld had to figure out all of the plumbing and parts and systems on his own. Supervisors would steer and make sure he “didn’t do anything stupid,” but it was an environment for the inquisitive, adventurous, and self-motivated.

    Auld says that such a mindset is becoming increasingly rare in a time when young programmers would rather be spoon-fed code than take five minutes to write something and see if it works. On CodeProject, Auld tries to point people in a direction and encourage them to reverse-engineer what other people have done instead of saying, “There’s your 25 lines of code. Get on with it.” Every project involves a learning and research phase. Expect it, don’t look for shortcuts, and keep the results of these learning processes in a personal code library.

    Roll with what you’ve got. Given his time constraints, Auld had to use some generic car images as part of the interface’s readouts. He hopes to expand this image set in the future.

    Even before starting to code, Auld recommends that developers write out the app they have in mind as a narrative. Approach it as a technical article. Creating at least a bullet-point plan forces the developer to break down the application’s structure and functionality, which in turn offers more guidance during development. Writing an application as an article makes the developer think about what he or she is trying to convey to the end user and the ways in which those priorities can best be communicated.

    Finally, Auld encourages developers to “just dive in.” Try and fail. Don’t be afraid to ask questions and get involved on sites such as CodeProject. Auld admits to being little more than a silent trawler for his first seven or eight years on the site. Armed with enough years of slow but sure learning, he was finally ready to become more active and give back into the community. He adds, “That is an important thing for people to do. Don’t just take all the time, but give back, as well.”

    Resources


    Auld provides extensive details on the processes and tools he used in constructing UltraDynamo in his five-part CodeProject article. This shows his path from Visual Studio setup through code signing and packaging. Along the way, he also investigated online coding tool reseller ComponentSource and, as noted earlier, resources found at CodeProject ultimately formed the foundation for UltraDynamo’s interface.

    Auld stresses that he couldn’t have won his contest category without help from Intel’s forums, Intel tech support, and, most of all, the developer community. “Without the inspiration and help of notable gurus on CodeProject like Pete O’Hanlon, who helped manage the sensors, this wouldn’t have happened. The code I had written was garbage in comparison. Listening to other people is so important.”


    Portions of this document are used with permission and copyright 2012 by CodeProject. Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see http://software.intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel AppUp, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved
