
Windows* 8 Desktop App - Connected Standby Whitepaper


Download Article

Windows* 8 Desktop App - Connected Standby Whitepaper (PDF)

Related Content

Windows* 8 Desktop App - Connected Standby abstract and source code
Windows* 8 Desktop and Store Code Samples

Introduction

Connected Standby is a new feature introduced by Microsoft in Windows 8* for SoC-based platforms. The use case on tablet and mobile systems is similar to that on phones: Instant On and Always Connected. The Intel® dual-core Atom™ Z2760 (code name Clover Trail) is the first Intel platform to support Connected Standby. Connected Standby is critical to achieving the lower power targets for many devices and to demonstrating power/performance leadership. Interesting applications, such as push email and multiplayer games, can be developed for Connected Standby to further showcase the platform's capabilities.

In this sample application, we showcase Connected Standby and demonstrate the use of the API to register a background task with a timer for periodic execution, so that when the system enters standby, the task executes and generates an execution log. It is a simple interface and application that any ISV can reuse inside their own apps.

How it works

Connected Standby is primarily designed for very low power consumption and for achieving the battery-life targets. It is part of the Microsoft certification, and devices need to conform to the requirements set by Microsoft for Connected Standby. Devices enter Connected Standby when the power button is pushed or after an idle timeout. Based on current testing, the power draw is < 100 mW, which is the requirement set by Microsoft. Instant On is < 300 ms from button press to display on. Overall, the platform is mostly asleep; only select apps execute occasionally. Figure 1 below illustrates the system state transitions as a flowchart. Figure 2 below shows a more detailed view of the transitions and the events associated with each transition.

Figure 1: State Transition

Figure 2: Connected Standby – Flow of actions

Applications Conforming to Connected Standby

Microsoft has designed the API and the various interfaces so applications can use the Connected Standby feature and remain power efficient even while running. Windows* UI and Store apps are designed to be power efficient from the ground up. Windows Store apps (using the new Windows UI features) are suspended when the device enters Connected Standby. Microsoft has created the concept of background tasks that run while the device is in standby. Windows Store apps can include background tasks that run in Connected Standby, and these background tasks run even if the corresponding application is suspended or has exited. To save power, Windows 8 imposes tight restrictions on background tasks. Figure 3 illustrates the concept.

  • Background tasks run either:
    • Periodically (no more often than once every 15 minutes)
    • When triggered by an event: network becomes available, network event, push notification
  • Background tasks have limited CPU time to run:
    • Lock screen apps (up to 7) get 2 seconds every 15 minutes
    • Non lock screen apps get 1 second every 2 hours
  • Limited network bandwidth:
    • Lock screen apps get 4.69 MB every 15 minutes (daily max of 450 MB)
    • Non lock screen apps get 6.25 MB every 2 hours (daily max of 75 MB)

Figure 3: Application in Connected Standby

Writing Background Tasks

Trigger Type | Description | Lock screen app only
Time trigger | Time event | Yes
Control channel trigger | Data on open TCP channel | Yes
Push notification trigger | Data from Windows* Push Service | Yes
System trigger (network up, network down, network status) | Various network and system events | No
Maintenance trigger | Same as time trigger, except fires only when device is plugged in | No

Registering a timed task

Below is a code snippet based on the Microsoft classes and APIs that allow developers to register a timed task with the BackgroundTaskBuilder class. Any Windows 8 application can use this to register a timed task with the specified trigger value. The trigger value is specified in minutes (15 minutes is the minimum interval). Refer to MSDN for further information regarding the API and the classes (http://msdn.microsoft.com/en-us/library/windows/apps/br224847.aspx).

using namespace Windows::ApplicationModel::Background;

//Associate an app with a trigger
    auto builder = ref new BackgroundTaskBuilder();
    builder->Name = "SampleBackgroundTask";
    builder->TaskEntryPoint = "Tasks.SampleBackgroundTask";

//Create a time trigger (freshness time in minutes; 15 is the minimum)
    IBackgroundTrigger^ trigger = ref new TimeTrigger(15, false);
    builder->SetTrigger(trigger);

//Register task with OS
    builder->Register();
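
A time trigger fires only for lock screen apps (see the table above), so the application typically needs to request lock screen access before the registration succeeds. Below is a minimal sketch using the standard BackgroundExecutionManager API and the PPL tasks library; the helper function name and control flow are illustrative, not part of the sample:

#include <ppltasks.h>
using namespace Windows::ApplicationModel::Background;

//Ask the user to add the app to the lock screen; time triggers only fire for lock screen apps
void RequestLockScreenAccess()
{
    concurrency::create_task(BackgroundExecutionManager::RequestAccessAsync())
        .then([](BackgroundAccessStatus status)
    {
        if (status == BackgroundAccessStatus::AllowedWithAlwaysOnRealTimeConnectivity ||
            status == BackgroundAccessStatus::AllowedMayUseActiveRealTimeConnectivity)
        {
            //Lock screen access granted: safe to register the TimeTrigger task
        }
    });
}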

Background Task Execution

The following code snippet shows the background task class, which is instantiated and executed when the event is triggered. This is the same task that was created earlier and registered with the builder class. The SampleBackgroundTask's Run method can be populated with the required execution flow.

using namespace Windows::ApplicationModel::Background;
//Entry Point for Background Execution.
namespace Tasks
{
    public ref class SampleBackgroundTask sealed : public IBackgroundTask
    {
       ...
      virtual void Run(IBackgroundTaskInstance^ taskInstance) {
          // Code to be executed in the background
      }
      ...
    };
}

The task creation is part of the samplebackground.xaml.cpp file, and timetriggeredtask.xaml.cpp contains the code for the timed task.
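
If the Run method performs asynchronous work (for example, writing the execution log to a file), it should take a deferral so the OS does not tear the task down before the work completes. The sketch below uses the standard IBackgroundTaskInstance::GetDeferral API; the body is illustrative, not the sample's exact implementation:

virtual void Run(IBackgroundTaskInstance^ taskInstance)
{
    //Take a deferral before starting asynchronous work
    auto deferral = taskInstance->GetDeferral();

    //... write the execution log entry here ...

    //Signal that the background work is finished
    deferral->Complete();
}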

Summary

The sample code shows how to register a background task and demonstrates its usage and flow. When the system enters Connected Standby, the background task generates messages that are sent to the lock screen as notifications, demonstrating the capability of Connected Standby.

  • ultrabook
  • Windows* 8
  • Windows* Store application
  • SOC-based platforms
  • Connected Standby
  • Developers
  • Microsoft Windows* 8
  • Ultrabook™
  • Microsoft Windows* 8 Desktop
  • URL

  • Windows* 8 Desktop App - Low Power Audio Playback


    Abstract

    One of the use cases for tablets with Intel® Atom™ processors and Microsoft Windows 8* is low-power audio playback. This capability allows users to continue listening to music after the device enters a low-power state commonly referred to as Connected Standby. Connected Standby is an Always On Always Connected scenario implemented in Microsoft Windows and manifested through the new Intel Atom S0ix low-power states.

    We’ll show you how to write an HTML5 and JavaScript* application that sets up a simple audio player to take advantage of Connected Standby audio playback.

    Article

    Windows* 8 Desktop App - Low Power Audio Playback Whitepaper

    Download Source Code

    LowPowerAudio.zip

    License: Intel sample sources are provided to users under the Intel Sample Source Code License Agreement.

  • WindowsCodeSample
  • ultrabook
  • Windows* 8
  • Windows* Store application
  • Connected Standby
  • low-power audio
  • html5
  • JavaScript*
  • application
  • Developers
  • Microsoft Windows* 8
  • Ultrabook™
  • Microsoft Windows* 8 Desktop
  • URL
  • Windows* 8 Desktop App - Low Power Audio Playback Whitepaper


    Download Article

    Windows* 8 Desktop App - Low Power Audio Playback (PDF)

    Related Content

    Windows* 8 Desktop App - Low Power Audio Playback abstract and source code
    Windows* 8 Desktop and Store Code Samples

    Introduction

    One of the use cases for tablets with Intel® Atom™ processors and Microsoft Windows 8* is low-power audio playback. This capability allows users to continue listening to music after the device enters a low-power state commonly referred to as Connected Standby. Connected Standby is an Always On Always Connected scenario implemented in Microsoft Windows and manifest through the new Intel Atom S0ix low-power states. Using this state, devices can save dramatically on battery life while still allowing users to listen to music.

    We’ll show you how to write an HTML5 and JavaScript* application that sets up a simple audio player to take advantage of Connected Standby audio playback.

    Overview of Connected Standby

    (as described by Priya Vaidya)

    Connected Standby is primarily designed for very low-power consumption and achieving the battery targets. It is part of the Microsoft certification, and devices need to conform to the requirements set by Microsoft for Connected Standby. A device enters Connected Standby when the power button is pushed or after an idle timeout. Based on current testing, the power draw is < 100 mW, which is the requirement set by Microsoft; the Intel Atom processor Z2760 (code named Clover Trail) is at ~45 mW (roughly 30 days of Connected Standby). Instant On is < 300 ms from button press to display on. Overall, the platform is mostly asleep; only select apps execute occasionally.

    Connected Standby – Flow of actions

    Coding the HTML5 Application

    One of the easiest ways to code for low-power audio playback on Windows 8-based tablets with Intel Atom processors is to use the new HTML5 audio tag. By default, the audio tag will NOT continue playback during the low-power Connected Standby state. To specify audio playback during this state, the audio tag needs to include an audio category attribute. The attribute name is msAudioCategory, and it needs to be set to the value "BackgroundCapableMedia." This tells the underlying framework and runtime to configure everything necessary for Connected Standby playback.

    One last thing that needs to be done at the application level for low-power audio to function correctly is to modify the application's manifest. A declaration needs to be added for "Background Tasks," and within the declaration, a property also needs to be specified for "Audio." If using Visual Studio*, this can be done simply by opening package.appxmanifest, choosing the Declarations tab, adding a "Background Tasks" declaration from the Available Declarations drop-down list, and finally checking the "Audio" task type in the Properties section.

    This method of implementation can be seen in the Low-Power Sample app, in the files default.html and default.js. The package.appxmanifest can also be checked for the correct declaration setup. The sample also uses the "controls" attribute to automatically add simple playback controls.

    Hardware-Accelerated Audio File Playback

    Related to low-power audio playback on Intel Atom platforms is hardware-accelerated audio decoding. The Clover Trail platform supports a number of audio formats in hardware, for both decoding and encoding.

    Audio | Encode | Decode
    MP3 | H/W | H/W
    AAC-LC | H/W | H/W
    PCM (Wave) | H/W | H/W
    Vorbis |  | H/W
    HE-AAC |  | H/W
    WMA Pro 10/9 |  | H/W
    Dolby Digital |  | H/W
    MPEG-1 |  | H/W
    MIDI |  | H/W
    G.729AB/711/723.1 | H/W | H/W
    AMR-NB/WB | H/W | H/W
    iLBC | H/W | H/W
    Post proc/echo | H/W | H/W

    Summary

    Low-power audio playback adds a very valuable and useful scenario to mobile devices allowing them to act as traditional music players while significantly extending battery life. Using the new HTML5 audio tag and JavaScript, it is easy to add this functionality to your application, either statically or dynamically.

  • ultrabook
  • Windows* 8
  • Windows* Store application
  • SOC-based platforms
  • Connected Standby
  • Developers
  • Microsoft Windows* 8
  • Ultrabook™
  • Microsoft Windows* 8 Desktop
  • URL
  • Deeper Levels of Security with Intel® Identity Protection Technology - White paper


    White Paper: Deeper Levels of Security with Intel® Identity Protection Technology

    The latest release of Intel® Identity Protection Technology (Intel® IPT) in 2012 introduced additional capabilities beyond the initial one-time password (OTP) solutions embedded in silicon and extended secure computing to a broader range of consumer, enterprise, and business applications.

    The new Intel IPT capabilities included:

    • Public key infrastructure (PKI) to protect access to business data and bolster communications by means of embedded certificates over a virtual private network (VPN)
    • Protected transaction display (PTD) to minimize risks when entering PINs and passcodes
    • Near-field communication (NFC) to facilitate simple and secure sales transactions over the Internet

    This technical white paper examines the architecture and technology that form the foundation of Intel Identity Protection Technology and the ways in which solutions built around an embedded security token model help minimize fraud, protect online accounts, and substantially reduce the risk of identity theft. Read this detailed technical white paper on Intel Identity Protection Technology to understand how this technology offers strong, simple, and secure protection for individuals, websites, and businesses. Download the full white paper.

  • identity protection technology
  • OTP
  • PKI
  • NFC
  • security
  • token
  • One-Time Password
  • Business Client
  • Ultrabook™
  • Intel® vPro™ Technology
  • Security
  • PDF
  • Detecting Slate/Clamshell Mode & Screen Orientation in Convertible PC


    Downloads

    Download Detecting Slate/Clamshell Mode & Screen Orientation in Convertible PC [PDF 574KB]
    Download dockingdemo2.zip [ZIP 35KB]

    Executive Summary

    This project demonstrates how to detect slate vs. clamshell mode as well as simple orientation detection in Windows* 8 desktop mode. The application is a tray application in the notification area and is based on Win32 and ATL. The tray application also works when the machine is running in the new Windows 8 UI mode. It uses Windows messages and the Sensor API notification mechanism and doesn't need polling. However, the app requires appropriate device drivers, and it was found that many current OEM platforms don't have the necessary drivers for slate/clamshell mode detection. The simple orientation sensor works on all the tested platforms.

    System Requirements

    System requirements for slate/clamshell mode detection are as follows:

    1. Slate/clamshell mode indicator device driver (Compatible ID PNP0C60).
    2. To verify, go to Device Manager -> Human Interface Devices -> GPIO Buttons Driver -> Details -> Compatible Ids. If you find PNP0C60, that's the driver. Without this driver, slate mode detection doesn't work.

    System requirements for orientation detection:

    1. Simple Device Orientation Sensor.
    2. Present in all tested convertible PCs.

    Application Overview

    • Compile and run the application, and it will create a tray icon. For testing purposes, customize "Notification Area Icons" so that DockingDemo.exe's behavior is "Show icon and notifications" in the lower right corner of the screen.
    • Move the mouse over the icon, and it shows the current status.

    • Right-click the icon for further menus: About, Save Log…, and Exit. Save Log lets you save all the events to a specified file. When you save the events to the log, it clears the events in memory.
    • Rotate the platform or switch back and forth between slate and clamshell mode. The tray icon will pop up a balloon to notify you of the change.

    Slate / Clamshell Mode Detection

    The OS broadcasts the WM_SETTINGCHANGE message to the windows when it detects a slate mode change, with the string "ConvertibleSlateMode" in lParam. WinProc in DockingDemo.cpp handles this message. The API to query the actual status is GetSystemMetrics. This method also works when the system is running in the new Windows 8 UI mode.

    BOOL bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
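
    A minimal sketch of the corresponding handling inside the window procedure follows (illustrative; not copied verbatim from DockingDemo.cpp):

    // Inside the window procedure (WinProc)
    case WM_SETTINGCHANGE:
        if (lParam != 0 &&
            lstrcmpi(reinterpret_cast<LPCTSTR>(lParam), TEXT("ConvertibleSlateMode")) == 0)
        {
            // 0 means slate mode; non-zero means clamshell (laptop) mode
            BOOL bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
            // Update the tray icon / append to the event log here
        }
        break;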

    Screen Orientation Detection

    In the desktop environment, the OS broadcasts the WM_DISPLAYCHANGE message to the windows when it detects orientation changes. The low word of lParam is the width and the high word is the height of the new display resolution.
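
    A minimal sketch of reading the new resolution from the message (illustrative; not copied from the sample):

    // Inside the window procedure (WinProc)
    case WM_DISPLAYCHANGE:
    {
        int width  = LOWORD(lParam);   // new horizontal resolution
        int height = HIWORD(lParam);   // new vertical resolution
        BOOL bLandscape = (width >= height);   // landscape vs. portrait only
        break;
    }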

    There are two problems with this approach.

    • This approach only detects landscape and portrait mode. There is no distinction between landscape vs. landscape flipped and portrait vs. portrait flipped.
    • WM_DISPLAYCHANGE simply doesn't work when the system is running in the new Windows 8 UI mode.

    Fortunately, Microsoft* provides COM interfaces to directly access the various sensors, and there are various white papers about how to use them.

    In this project, the SimpleOrientationSensor class implements the infrastructure to access the orientation sensor, and the OrientationEvents class is subclassed from ISensorEvents to register the callbacks for orientation change events. Because the Sensor API uses a callback mechanism, the user application doesn't have to poll for events. This approach works when the system is running in the new Windows 8 UI mode.
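
    The Sensor API event sink is a plain COM object. A minimal sketch of such a sink follows, assuming the standard ISensorEvents interface from sensorsapi.h; the class body is illustrative and omits the error handling and notification plumbing found in the sample:

    #include <sensorsapi.h>   // ISensor, ISensorEvents, ISensorDataReport

    class OrientationEvents : public ISensorEvents
    {
    public:
        OrientationEvents() : m_ref(1) {}

        // IUnknown
        STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
        {
            if (IsEqualIID(riid, __uuidof(IUnknown)) || IsEqualIID(riid, __uuidof(ISensorEvents)))
            {
                *ppv = static_cast<ISensorEvents*>(this);
                AddRef();
                return S_OK;
            }
            *ppv = NULL;
            return E_NOINTERFACE;
        }
        STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_ref); }
        STDMETHODIMP_(ULONG) Release()
        {
            ULONG count = InterlockedDecrement(&m_ref);
            if (count == 0) delete this;
            return count;
        }

        // ISensorEvents callbacks; only OnDataUpdated is interesting here
        STDMETHODIMP OnEvent(ISensor*, REFGUID, IPortableDeviceValues*) { return S_OK; }
        STDMETHODIMP OnLeave(REFSENSOR_ID) { return S_OK; }
        STDMETHODIMP OnStateChanged(ISensor*, SensorState) { return S_OK; }
        STDMETHODIMP OnDataUpdated(ISensor* pSensor, ISensorDataReport* pReport)
        {
            // Read the simple device orientation value from pReport and
            // notify the tray application of the change.
            return S_OK;
        }

    private:
        LONG m_ref;
    };

    The sink is attached to the sensor with ISensor::SetEventSink, after which the callbacks fire without any polling.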

    The relationship between slate mode and rotation needs to be carefully thought out. Rotation may be enabled or disabled automatically depending on the slate/clamshell mode. To ensure the proper behavior, this sample uses a combination of the GetAutoRotationState API and the rotation sensor, i.e., it discards rotation event notifications when auto-rotation is NOT enabled. In that case, it uses EnumDisplaySettings to get the current orientation in the NotifyOrientationChange function, as shown in the code snippet below.
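
    The following sketch illustrates that logic, assuming the Win32 GetAutoRotationState and EnumDisplaySettings APIs; the function signature and body are illustrative, not the sample's exact code:

    // Discard sensor rotation events while auto-rotation is disabled and
    // read the current orientation from the display settings instead.
    void NotifyOrientationChange()
    {
        AR_STATE state = AR_ENABLED;
        GetAutoRotationState(&state);

        if (state != AR_ENABLED)
        {
            DEVMODE devMode = {0};
            devMode.dmSize = sizeof(devMode);
            if (EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &devMode))
            {
                // devMode.dmDisplayOrientation is DMDO_DEFAULT, DMDO_90,
                // DMDO_180, or DMDO_270 relative to the default orientation.
            }
            return;   // ignore the rotation sensor notification
        }

        // Auto-rotation is enabled: use the orientation reported by the sensor callback.
    }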

    Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

    *Other names and brands may be claimed as the property of others

    Copyright© 2013 Intel Corporation. All rights reserved.

    License
    Intel sample sources are provided to users under the Intel Sample Source Code License Agreement.

  • ultrabook
  • Windows 8*
  • desktop
  • Tablet
  • applications
  • slate mode
  • clamshell mode
  • orientation detection
  • Developers
  • Microsoft Windows* 8
  • Ultrabook™
  • Intermediate
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • URL
  • Case Study: 4tiitoo Constructs a Modern User Interface with Voice, Gesture, and Eye Tracking Input


    Download Article

    Download Case Study: 4tiitoo Constructs a Modern User Interface with Voice, Gesture, and Eye Tracking Input [PDF 1.1MB]

    By Karen Marcus

    In 2012, Intel held the Europe, Middle East, and Africa-based Ultrabook™ Experience Software Challenge to encourage developer invention and imagination in enabling a more spontaneous user experience with Ultrabook devices. Thirty participants from 11 countries competed for 6 weeks to develop original applications that integrated touch, gesture, and voice functionality. Judging criteria were as follows:

    • Functionality. Does the application work quickly and effectively, without any issues?
    • Creativity. Does the application represent an innovative usage model?
    • Commercial potential. How useful is the application for the mass market?
    • Design. Is the application simple to understand and easy to use?
    • Fun factor. How positive is the emotional response to the application?
    • Stability. Is the application fast and simple, without glitches?

    The software company 4tiitoo (pronounced “forty-two”), as a participant in the Ultrabook Experience Software Challenge, designed the winning app, NUIA* Imagine, a photo organizing and viewing application running on the Windows* 8 desktop.

    With a focus on natural user experience, the development team sought to use a variety of input modalities that offer a more comfortable computing experience than the traditional keyboard and mouse. Although the functionality of the app is familiar, the way the user interacts with it is unusual, with touch, gesture, and voice input as well as an eye-tracking component. The result is a modern user interface (UI) that allows multiple types of input for the same commands. For example, a hand swipe to the right, pressing the right arrow key, or saying, “Next,” all result in the app showing the image to the right of the current image on the screen.

    Product

    The idea for NUIA Imagine came from the 4tiitoo team and the problems they faced in organizing their photos from vacations and other events. The task was neither pleasurable nor efficient for them, and they decided to address the dilemma with an application that provides more flexible, yet intuitive functionality. The team developed NUIA Imagine specifically for the Ultrabook Experience Software Challenge.

    NUIA Imagine enables users to organize images into albums. The application reads all images within specified directories and displays them as thumbnails in the Miniature Preview in the lower-right area of the screen. The Workbench, in the center, displays thumbnails in a larger resolution, including a center image, which users can add to the active album. Any number of albums can be created, and users can switch among them using the Album Overview in the upper-left area of the screen. The Album Preview, in the upper-right area of the screen, displays the selected album (see Figure 1).


     Figure 1. NUIA Imagine interface

    NUIA Imagine is unique not because of what users can do with it but for how they can do it. Silke Eineder, marketing manager at 4tiitoo, explains: “Users can organize or enjoy looking at their photos in the most comfortable way. Instead of sitting in front of a computer for hours in an uncomfortable position, bound to a mouse and keyboard, NUIA Imagine makes use of Ultrabook sensors to allow users to organize photos from a relaxed position. This is possible because they can work hands-free, as NUIA Imagine supports hand gestures and voice commands.”

    So, users can literally sit back and have a cup of coffee while using the application. They can simply go through their photos, delete the ones that did not turn out well, organize the others into separate folders, or view the photos just by speaking various commands such as, “add,” “delete,” “next,” “previous,” and “maximize” (see Figure 2). Keystrokes and touch gestures can alternately be used to perform the same functions.


     Figure 2. NUIA Imagine rotation menu

    Development Process for Windows* 8

    The development team decided to create NUIA Imagine as a Windows 8 Desktop application so it could also be used in older Windows versions and be easily ported to other operating systems. To ensure operation on other operating systems, says Eineder, “Except for touch, we did not use any key features of Windows 8. We did this to support other versions of Windows and to make sure we didn’t have problems porting the application to non-Microsoft operating systems.”

    Another challenge was that the speech recognition software included with Windows 8 was less accurate than the software the team had used in the past. So the solution they found was the Nuance VoCon speech engine, which demonstrated much better recognition performance. Eineder explains: “The software needs a grammar file with the commands. On any speech input, it delivers a result with the top three recognitions and the respective detection rates. The basic design of the application and the used parts of the NUIA software development kit (SDK) libraries define all actions (swipe, add, delete, etc.) completely agnostic from any trigger event. So, every modality, such as speech, only needs to send a trigger event (the top recognized command). Everything else is already implemented in the lower stack. The speech recognition engine, which is used in the Intel® Perceptual Computing SDK, is similar to this engine.”

    Development Tools

    The team used the multimodal NUIA SDK, the Qt Creator, and many sheets of paper to develop NUIA Imagine.

    NUIA SDK

    The NUIA Imagine application was developed based on the NUIA SDK and middleware, which Stephan Odörfer, co-founder and CTO, describes: “The NUIA SDK is a hardware-agnostic infrastructure for connecting all kinds of input modalities to standard operating system interactions. That means there’s an abstraction layer inside that sends out a command, like, ‘Next.’ This command can be triggered by any modality in use by the computer. For example, a ‘Next’ command could be input using the right cursor key, a swipe gesture, or voice input. The design of the software is completely agnostic with these modalities. So, a programmer could, without any problem, add another modality, which then already controls the actions of the software, without any further development effort.

    “The NUIA SDK also connects to other SDKs, such as the Intel Perceptual Computing SDK. It allows the creation of multisensor-optimized applications and the enhancement of legacy applications without the need for in-depth sensor knowledge.”

    Another modality used within NUIA Imagine is eye tracking. Odörfer notes, “Eye tracking is a very important modality in the NUIA SDK, and we worked very closely with several divisions within Intel and also Tobii Technologies from Sweden to implement it.”

    The following list, adapted from the 4tiitoo website, outlines other functions of the NUIA SDK:

    • The NUIA tools provide integrated development environment wizards, debug tools, and the Extension Creator, a graphical UI tool, to create multimodal extensions for legacy applications, without the need for any source code modification.
    • The NUIA user experience provides a powerful set of libraries, application programming interfaces (APIs), and bindings to several programming languages and frameworks.
    • The NUIA Core provides a message-passing infrastructure for its plug-ins and a control UI.
    • The NUIA Core plug-ins contain the main functionalities and communicate over well-defined messages (with a maximum of abstraction in mind), connect to various SDKs and low-level APIs for retrieving input data, generate legacy events (e.g., keyboard shortcuts, mouse cursor control), and can also implement more complex algorithms and macros.
    • The Interprocess Communication Framework assures communication between the NUIA components and NUIA-enhanced applications.
    • The Context subsystem provides information about all states of the underlying operating system (e.g., currently focused application, user logged in, and screen resolution).
    • The NUIA documentation provides a comprehensive set of tutorials, examples, and support tools.


    Qt Framework

    The team had worked previously with the Qt framework and therefore was familiar with its capabilities. Qt is an event-driven framework: touch events are embedded in the framework, and they can be recognized and handled just like mouse or keyboard events. This functionality provided the team with the ability to create the application such that it can respond to touch events just as it reacts to mouse events.

    Development Process for Ultrabook Platforms

    NUIA Imagine supports several input modalities, including keyboard, mouse, touch screen, speech, gestures, and eye tracking. These modalities offer users a faster and more immersive experience. The team determined which modalities to include based on case discussions of typical user situations.

    Touch and Gestures

    As part of the focus on natural user experience, the team incorporated touch and 3D hand gestures. Eineder explains, “Touch and gestures are more natural to humans because these actions are part of our daily interaction with other people and things. Human eye-hand coordination is optimized for these kinds of movements (such as swiping left or right), rather than pressing different keys.”

    NUIA Imagine supports touch by using Qt-based touch events. To determine which touch optimizations to make, the team analyzed which touch gestures were most intuitive to use with the application and were already known to users based on their experience with smartphones and tablets. They tested the optimizations with people not involved in the development process.

    One optimization was improving the recognition of swipe gestures. The challenge inherent with this modification, says Eineder, is, “Every user performs gestures a little differently; however, the application needs to recognize all of them.” She adds, “We did a lot of testing regarding the time span after which a gesture is recognized. After that, we fine-tuned the variables responsible for the detection process. This adjustment made the touch feature much more intuitive to use.” The variables are used to define the time between the “touch begin,” “touch update,” and “touch end.” The correlation among the three was fine-tuned and user tested for more accurate touch recognition.

    As another input modality, 3D gestures are used to control the main functionalities. The 3D gestures supported within NUIA Imagine are swipe left for next image, swipe right for previous image, and swipe up to add the current image to the active album. These three gestures come from the OpenNI* software. Other gestures are possible, but given the time parameters of the challenge, the team decided to implement just those three.

    Gestures are recognized in the application with the NUIA SDK. Bastian Wolfgruber, chief application developer, says, “We use OpenNI to track the gestures, and then wire the NUIA Core. The gesture commands are sent to the application, which reacts to those gestures.” He adds, “There’s no need to calibrate. The user just holds up an arm; when it is recognized, the user can do the gestures.”

    Voice Recognition

    NUIA Imagine uses the speech-recognition software from Nuance. All major interactions can be triggered by speech. The application recognizes seven voice commands: “next,” “previous,” “add,” “delete,” “minimize,” “maximize,” and “rotate.” The team wanted the voice modality to be simple and intuitive, so users could begin interacting with the application without reading application documentation.

    To arrive at the decision to use voice recognition, the team discussed different possibilities for the main use case. Odörfer says, “Using voice recognition is an elegant way to command an application without actually sitting directly in front of the computer. Speech is a natural communication, like gestures or eye movements, in contrast to the standard existing technologies like the keyboard or the mouse.”

    The voice-recognition modality currently operates only in English. However, the application is set up to be multilanguage. Odörfer observes, “To extend the languages, we would only need to implement new dictionary files because we use a speech-recognition engine. Using the Nuance Framework, you just add, for example, a German dictionary file, and then the application also reacts to German commands.”

    Eye Tracking

    In addition to keyboard and mouse, touch, gestures, and voice recognition, NUIA Imagine can be controlled via eye tracking. Odörfer comments, “Eye tracking is an important modality in the NUIA SDK.”

    The application allows eye tracking to indicate which element the user wants to interact with. Eye tracking can also be combined with voice commands and other modalities. For example, a user could look at any picture in a gallery, say, “Add,” and the picture is added to the album.

    As another example, if the user looks at the Workbench, three images are available: the main picture, the previous image, and the next image. If the user looks at the next image, it automatically moves to the main position. However, says Odörfer, “Auto-gaze actions are not always intended. For example, if you look at a Delete button, you might not want to trigger the action immediately. So, in most cases, the user performs the triggering action with a specific key on the keyboard, the middle mouse button, or other intentional triggering.”

    Odörfer adds, “Eye tracking will not completely replace other modalities, but in combination with other modalities, it greatly enhances work on a next-generation computer.”

    Eineder notes, “It makes it more comfortable because you do not have to use a keyboard or mouse. You can choose, and you can lay back and relax. On the business side, it’s more productive because you can look at a menu and open it while keeping your hands on the keyboard.”

    Eye tracking is intuitive for users. Odörfer says, “It takes most users maybe half a minute or a minute, and then they completely adapt. At first, they think they would have to look differently from the way they normally do, but actually users just look as they always look at their screen, and the system performs an action without the user touching anything.” The team performed user testing to ensure that typical users would understand the eye tracking actions as an appropriate response from the application. Odörfer says, “The idea is to support the user, and there is an advantage to using eye tracking for this.”

    An eye tracking peripheral and the NUIA Software Suite must be installed to use the eye-tracking function. When the application is launched for the first time, the user must do a 30-second calibration to enable eye tracking. Odörfer says, “The current generation of eye tracking has a level of accuracy of 0.5 degrees, which is, in a standard operating mode, something like 15 or 20 pixels on the monitor, similar to touch screen accuracy. So, users cannot control small buttons, which are used in some desktop applications, but they can easily control applications that are optimized for touch screens or Windows Store applications because the buttons are large enough.

    In the NUIA SDK, we have components that understand which element is below or maybe close by, and then click this element, even if the exact gaze position is not on the element. This is similar to using an Android* or iPad* tablet and touching a browser link but not hitting it exactly. The browser checks to see if there is a link close by. If there is a link, it activates the closest one to the touch point.”

    As a demonstration of this technology, 4tiitoo partnered with Intel and Tobii to enable the game Minecraft* from Mojang to be controlled with eye tracking using the native NUIA SDK parts. This version of the game was presented in the Intel booth at MINECON 2012.

    Eineder comments, “In general, with eye tracking, there are a whole lot of possibilities that will come up. For example, you can easily control the Windows 8 Start screen with it. As soon as this technology is made available in Ultrabook [devices] or in desktops, the interfaces will adapt bit by bit, and the whole way we work with computers will actually change.”

    Challenges and Opportunities

    The team’s development process was not without challenges. Wolfgruber says, “We found it challenging to keep the Workbench, the maximized areas, the albums etc., synchronized with the underlying database, so that the right images are in the right position. Also, keeping the application running smoothly, even with big data, needed deeper attention.”

    Through the development process, key opportunities included:

    • Creating a photo-organizing application that doesn’t require the user to sit uncomfortably at a desk for a long period of time
    • Developing an application that could work with a variety of operating systems
    • Finding the right voice recognition software
    • Determining which modalities and commands to include for the fastest and most immersive user experience
    • Implementing an abstraction layer for commands
    • Fine-tuning input recognition

    The Ultrabook Experience Software Challenge

    In developing for Ultrabook devices, the 4tiitoo team was most impressed with their touch capabilities and sensors as well as the thin design. As first place winners of the Ultrabook Experience Software Challenge, the team clearly made good use of these features.

    For the 2012 Ultrabook Experience Software Challenge, EMEA-based software developers and independent software vendors were invited to submit their creative ideas for software applications that take advantage of the latest Ultrabook functionality, including touch, gesture, and voice recognition. The objective was to foster innovation and developer creativity for a more immersive and intuitive user experience with Ultrabook devices. Thirty participants were selected, with nominees from 11 different countries: the United Kingdom, Spain, Italy, Germany, the Netherlands, Russia, Romania, Israel, France, Greece, and Malta. Each participant received an Ultrabook Software Development Platform and six weeks to finish the application. The jury consisted of engineering, marketing, and retail representatives within Intel.

    In terms of next steps, the team hopes to enhance NUIA Imagine with additional basic editing tools, deeper integration of speech control (e.g., voice tagging of pictures), and integration of social media and cloud functionalities.

    Summary

    The development company 4tiitoo was selected to participate (and ultimately won first place) in the Ultrabook Experience Software Challenge. For the challenge, the company developed NUIA Imagine, an application that helps users organize photos into albums. Users can provide input to the application with keyboard and mouse, touch, voice recognition, and eye tracking, enabling them to choose the most comfortable and natural way to interact with the software. The team decided to make NUIA Imagine a desktop application so that it would be available for use with as many operating systems as possible. The team used the Qt framework and the NUIA SDK to program the application. The types of input and the commands available in the application were based on those that would be most intuitive for typical users based on previous experiences with smartphones, tablets, and other software. Eye tracking is the newest technology used in the application. The most challenging part of the development process was keeping the app running smoothly even with big data.

    Company

    4tiitoo AG is a pioneer in developing software solutions focused on natural user experience and business models for next-generation computing devices. The company was founded in 2007 to bring a more intuitive and natural user experience to daily computer interaction.

    With a focus on touch at the time, 4tiitoo launched the tablet PC, WeTab, in 2010. Since then, the company has extended development to a multisensor user experience and provides intuitive software solutions across platforms, sensors, and languages.

    4tiitoo’s latest product, the NUIA (Natural User Interaction) Software Suite, offers original equipment manufacturers and sensor vendors a high-level abstraction layer with an extension model that easily enables existing applications for new computing capabilities. For developers, the NUIA technology provides a simple way to create applications based on the comprehensive NUIA SDK.

    About the Author

    Karen Marcus, M.A., is an award-winning technology marketing writer with 16 years of experience. She has developed case studies, brochures, white papers, data sheets, solution briefs, articles, website copy, video scripts, and other documents for such companies as Intel, IBM, Samsung, HP, Amazon Web Services, Microsoft, and EMC. Karen is familiar with a variety of current technologies, including cloud computing, IT outsourcing, enterprise computing, operating systems, application development, digital signage, and personal computing.

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the US and/or other countries.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • Windows 8*
  • desktop
  • applications
  • user experience
  • photo organizing
  • NUIA* Imagine
  • Developers
  • Microsoft Windows* 8
  • Ultrabook™
  • Touch Interfaces
  • User Experience and Design
  • URL
  • Do It Yourself - Chromium Web Application Container


    Table of Contents

    1 How to Build a Chromium-based HTML5 Hybrid Application from Scratch on an Ultrabook

    This tutorial will walk you through the process of creating a hybrid HTML5/C++ application container.

    A hybrid application allows the developer more flexibility in accessing computer resources that have traditionally been reserved for native code applications. For example, the developer could access a camera, accelerometer, or other hardware sensors.

    Another advantage of a hybrid application is that the developer can package it and submit it to an application store, such as the Intel AppUp Center. The web runtime that is part of the hybrid application ensures that the developer has a known environment for building HTML5 applications.

    The procedure in this tutorial was tested on Windows 7.

    1.1 The Chromium Project

    The Chromium Project is an enormous collaboration consisting of many tens of thousands of source code files. The project, of course, was originally founded by Google, but now enjoys contributions from many and varied sources. Consequently, the Chromium Web Platform is on the cutting edge of browser and HTML5 technology. This makes it an ideal candidate from which to build our hybrid application container.

    Chromium also has its own source control management tools, called the depot tools. These ease the process of checking out and synchronizing files with the stable or head releases of the repository. The depot tools can manage a repository using either Git or SVN, depending on your gclient configuration file. The main tool that we will use during this tutorial is gclient. Let's walk through setting up your environment and then setting up the Chromium depot tools.

    For more information please refer to the Chromium Wiki.

    1.2 Setting Up Your Build Environment

    In order to successfully build the Chromium project, you will need to set up a number of prerequisite software tools, development libraries and several patches. Let's step through each of them.

    1.2.1 Install Visual Studio 2010

    First you'll need to install Microsoft Visual Studio 2010. According to the Chromium wiki, this is the preferred toolchain for building Chromium on Windows; support for Microsoft Visual Studio 2008 will be deprecated in the near future. The Chromium discussion groups have a few posts on attempting to build with the MinGW32 compiler, but it's not supported by the Chromium Project, and gclient does not create project files for this compiler. Some people also attempt to use Cygwin and have gotten it working. However, staying within the preferred toolchain is definitely the path of least resistance.

    Microsoft Visual Studio 2010 Express can also be used to build the Chromium project. For complete details, see the Chromium wiki. There aren't many deviations from what was tested in this article, but there are a few.

    Download and install a copy of Microsoft Visual Studio from Microsoft's website or through the Microsoft Developer Network. You can also download Microsoft Visual C++ 2010 Express for free.

    1.2.2 Install Microsoft Windows SDK for Windows 7 and .NET Framework 4

    Next you'll need to install the Microsoft Windows SDK for Windows 7 and .NET Framework 4. This SDK contains compilers, header files and libraries that you'll need to compile the Chromium Project, and most any Win32 application.

    Download and install the Microsoft Windows SDK for Windows 7 and .NET Framework 4.

    1.2.3 Install June 2010 DirectX SDK

    Next you need to install the DirectX Software Development Kit. DirectX is a build dependency of the Chromium Project and is needed to support accelerated graphics, which are often used in HTML5 applications.

    1.2.4 Install VS2010 SP1

    Even though it's not the second Tuesday of the month (patch Tuesday), you will still need a patch. Before Chromium can be compiled, it's a good idea to install Visual Studio 2010 Service Pack 1.

    1.2.5 Install Your Preferred Version Control System (Git or SVN)

    Since the Chromium Project is available either by Git or SVN, the developer has some flexibility to choose the tools that integrate best into his or her workflow. For the purposes of this article, all we need to do is to synchronize with the repository. So either Git or SVN will do just fine.

    • Installing Git

      Git on Windows has traditionally been difficult and not very well supported. However, the msysgit project has made this much easier. You can also combine this tool with the TortoiseGit project, which provides access to Git from the Windows Explorer context menu.

    • Installing Subversion (SVN)

      If Subversion is your preferred version control system, then the TortoiseSVN is a well-known project that has been around for a number of years. SVN was the original version control system that the Tortoise team integrated into Windows Explorer.

    1.2.6 Install Chromium Depot Tools

    • Install with Git
      Lastly, you will need to install the Chromium depot tools. If you've installed msysgit, you can open up the GitBash application, change directories to your preferred installation directory for the depot tools, and type the following.

      $ cd c:/
      $ mkdir chromium
      $ cd chromium
      $ git clone https://git.Chromium.org/Chromium/tools/depot_tools.git

      This will make a copy of the depot tools repository to your local computer.

      For detailed information on how to do this you can see the Install Depot Tools Page on the Chromium wiki.

      Lastly, add the depot tools directory to your Windows Path variable.

    • Install Subversion

      If you're using TortoiseSVN, open Windows Explorer to the directory into which you wish to install the depot tools. Right-click and select "SVN Checkout". In the "URL of Repository" text field, enter the following URL:

      https://src.Chromium.org/viewvc/chrome/trunk/tools/depot_tools/

      This will make a copy of the depot tools directory from the Chromium subversion source repository.

      When you're finished copying the repository, you will need to add the directory to the Windows PATH variable. For example, if you downloaded the depot tools to the directory c:\Chromium\depottools, you will need to add the directory c:\Chromium\depottools\bin to the Windows PATH environment variable.

    • Put Depot Tools Directory in the PATH

      Add the depot tools directory to your PATH environment variable.

    • Installing the dependencies for depot tools

      The first time you run gclient, it will install the dependencies that it needs to run, including Git, Python, and Subversion. After the dependencies are finished installing, a list of all the commands that can be used with gclient is printed. These commands are very similar to commands that you may have used in Subversion or Git, such as cleanup, fetch, diff, revert, status, sync, and update.

      Now that the depot tools are installed and ready to use, we can proceed to configuring them and obtaining a copy of the Chromium source code.

    1.3 Download the Chromium Source Code

    Before you do a checkout with gclient, it will speed up the process if you first download a tarball of the Chromium source. The Chromium Wiki gives this link to a tarball as a starting point.

    Even though you download a zip archive of the source repository, you should still expect that syncing with the repository will take a long time. If you have an older tarball of the source, go have a nice meal at your favorite restaurant while it syncs.

    1.4 Configure Your Tools in Preparation for Building Chromium

    To build Chromium, a number of configuration steps need to be done to ensure that the build runs. First, specify what type of project configuration and make files GYP should generate. Second, we'll create some environment variables and use them to configure our project's include and library paths. Third, we will configure gclient, and last, we will generate the Chromium project files.

    1.4.1 GYP - Generate Your Project

    GYP is a Python tool that allows you to specify build configuration information in a Python dictionary. It's platform independent, which is a big advantage for a multi-platform project such as Chromium. GYP can generate platform-specific project descriptions and build scripts, including Microsoft Visual Studio solution files, Xcode project files, and GNU make files.

    • Configure GYP before generating the Visual Studio Solution Files
      In order to build on a Windows platform, only a very small amount of configuration needs to be done. Specify the version of the Windows SDK by setting the 'msbuild_toolset' variable in %USERPROFILE%\.gyp\include.gypi.

      mkdir $Env:userprofile\.gyp
      cd $Env:userprofile\.gyp
      notepad include.gypi

      Create the file if needed and copy this into it:

      {
        'target_defaults': {
          'msbuild_toolset': 'Windows7.1SDK',
        },
      }
      

    1.4.2 Create Windows ENV variables

    For the configuration files that we are about to set up, we'll create a couple of environment variables. These act as shorter names for specific directories or as configuration variables.

    Create a Windows environment variable called DXSDKDIR and set it equal to the location of the DirectX SDK. Create a Windows environment variable called GYP_MSVS_VERSION and set it equal to '2010'. This will cause GYP to generate files for Microsoft Visual Studio 2010.

    1.4.3 Add the libraries to your project paths

    Add the include and library paths to the Microsoft.Cpp.Win32.user.props file. It's located in the '%LOCALAPPDATA%\Microsoft\MSBuild\v4.0\' directory. On my computer, I'm logged in as the user "intelssg", and the value of %LOCALAPPDATA% is 'c:\Users\intelssg\AppData\Local\'.

    Under "Include files" add $(DXSDKDIR)\include. Under "Library files" add $(DXSDKDIR)lib\x86.

    Be sure you are adding both entries to the front of the lists.

    The contents of the file on my Ultrabook looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <IncludePath>$(DXSDKDIR)\include;$(IncludePath);$(WDKDIR)\inc\atl71;$(WDKDIR)\inc\mfc42</IncludePath>
        <LibraryPath>$(DXSDKDIR)\lib\x86;$(LibraryPath);$(WDKDIR)\lib\ATL\i386</LibraryPath>
      </PropertyGroup>
    </Project>

    1.4.4 Configure gclient

    Configure gclient by typing 'gclient config' with the repository URL:

    cd %CHROMIUMHOME%
    gclient config http://git.chromium.org/chromium/src.git --git-deps

    Open your .gclient file and insert the following text

    solutions = [
      { "name"        : "src",
        "url"         : "http://src.chromium.org/svn/trunk/src",
        "deps_file"   : "DEPS",
        "managed"     : True,
        "custom_deps" : {
          "src/third_party/WebKit/LayoutTests": None,
          "src/chrome_frame/tools/test/reference_build/chrome": None,
          "src/chrome/tools/test/reference_build/chrome_mac": None,
          "src/chrome/tools/test/reference_build/chrome_win": None,
          "src/chrome/tools/test/reference_build/chrome_linux": None,
        },
        "safesync_url": "",
      },
    ] 
    

    For more information about the initial checkout using Git, see UsingNewGit.

    1.4.5 Generate the Visual Studio Solution Files

    <div class="terminal">gclient runhooks</div>

    1.4.6 Update through gclient

    You can get the source code to the Chromium project from its repository. Here you will want to read over the Chromium wiki page entitled how to get the code: http://www.Chromium.org/developers/how-tos/get-the-code. The Google guide to using Git is an excellent source for learning how to wield Git; it can be found at http://code.google.com/p/Chromium/wiki/UsingNewGit.

    Create a directory to hold your source code. This example assumes c:\Chromiumtrunk, but other names are fine. Important: make sure the full directory path has no spaces. In a shell window, execute the following commands:

    cd c:\Chromiumtrunk
    gclient config https://src.Chromium.org/chrome/trunk/src
    svn ls https://src.Chromium.org/chrome

    Permanently accept the SSL certificate when prompted. To download the initial code, update your checkout as described below.

    For faster updates, gclient can be given a jobs flag. Use the -j / --jobs flag to update multiple repositories in parallel, for example:

    gclient sync --jobs 12

    • Get a specific version of the Chromium source code.

      For a production application you will probably want to check out a specific release of Chromium rather than the bleeding edge of the repository.

      For example, if you wanted the source for build 5.0.330.0, the following command would be appropriate:

      gclient config https://src.Chromium.org/chrome/releases/5.0.330.0

    1.5 Compile Chromium

    Open the chrome/chrome.sln solution file in Visual Studio and build the solution. According to the Chromium Wiki: Build Instructions for Windows, "this can take from 10 minutes to 2 hours. More likely 1 hour." However, on my laptop, which has only 6 GB of RAM rather than the recommended 8 GB minimum, it took the better part of a day: I started the build at the end of the day and came back to it the following day.

    This will build the entire Chromium source tree, including all of the browser tests. If you'd like to learn more about speeding up the build then see Accelerating the Build on the Chromium Wiki: Build Instructions for Windows.

    1.5.1 The Content Shell

    The Chromium Content module is located in the src\content directory of the Chromium project, and it contains all the functionality to render a page using a multi-process sandboxed browser. This includes the features of the web platform: HTML5, JavaScript, CSS, and GPU acceleration.

    The Chromium Content module exists to isolate developers from the inner workings of page rendering and to provide the Content API, which is the preferred way for developers to access content functionality. Code that uses the Content module should rely completely on the Content API and stay separate from other parts of the system and other API calls.

    The Content Shell is a minimal application that embeds the Content module to render web pages. This also means that developers looking to embed Chromium can begin with the Content module and use the Content Shell application as a guide for their own embedding projects.

    For more information about the Chromium Content Module or the Content API, you can visit http://www.Chromium.org/developers/content-module and http://www.Chromium.org/developers/content-module/content-api

    1.5.2 Modifications to Chromium

    Your first task will be to create a directory that hosts the starting point of your HTML5 application.

    In the directory %CHROMIUMROOT%\src\content\shell, open "shell_browser_main_parts.cc" and go to line 32 in the GetStartupURL() function. Change the startup URL from "http://www.google.com/" to "///__app/index.html", or whatever you like. This will allow you to create a directory named "_app" (or whatever you like) in the same directory as content_shell.exe that contains an entry point for your HTML5 application.

    Next, we'll comment out the browser chrome (menus, buttons, toolbars, etc.) so that content_shell.exe is nothing but a blank canvas for your HTML5 application.

    In the same directory, %CHROMIUMROOT%\src\content\shell, open "shell_win.cc" and go to line 30.

    Change:

    const int kURLBarHeight = 24;

    to:

    const int kURLBarHeight = 0;

    Then jump down to line 110 which is:

    HWND hwnd;
    

    Begin a multi-line comment.

    /* HWND hwnd;
    

    Jump down to line 146, just before the ShowWindow function call.

    ShowWindow(window_, SW_SHOW);
    

    Then end the comment and set the menu to NULL:

    */
    SetMenu(window_, NULL);
    ShowWindow(window_, SW_SHOW);
    

    Congratulations, you may now compile.

    1.6 Extracting the HTML5 Hybrid Application

    Now that you have compiled the Content Shell, we need to find the files that are required to create a standalone HTML5 hybrid application container.

    Change directory to %CHROMIUMROOT%\src\build\Release

    and create a new directory named hybridApp, for example c:\tmp\hybridApp.

We will copy the main executable file and its dependencies from the Chromium release build directory to our newly created, standalone directory.

Copy these files:

    • contentshell.exe - the main application (which I renamed hybridApp.exe)
    • libEGL.dll
    • libGLESv2.dll
    • contentshell.pak
    • avcodec-54.dll
    • avformat-54.dll
    • avutil-51.dll
    • icudt.dll

    1.7 Conclusion

    Congratulations, you now have a standalone directory with a HTML5 Hybrid Application Container.

Before we end the article, though, it is useful to be able to understand the level of HTML5 compliance in your new hybrid application environment.

    hybridApp.exe http://html5test.com/ - View the compatibility and HTML5 compliance of the hybrid app container.
    

Also see Peter Beverloo's list of command line options that you can use when starting Chromium. Just a warning though: most of these switches apply to both Chromium and the content shell, but not all of them.

  • html5 hybrid application container chromium
  • Developers
  • Intel AppUp® Developers
  • Partners
  • Professors
  • Students
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • HTML5
  • Tablet
  • Ultrabook™
  • C/C++
  • HTML5
  • JavaScript*
  • Advanced
  • Intermediate
  • Intel® HTML5 Development Environment
  • Development Tools
  • Game Development
  • Microsoft Windows* 8 Desktop
  • Mobility
  • Porting
  • URL
  • Code Sample
  • Getting started
  • Updating Your Desktop Application for Windows* 8


    Download as PDF

    By Bill Sempf

    Windows* Store apps in the Microsoft design style are all the rage right now, but many of us have existing Desktop applications that are going to stay Desktop applications. From network access to user interface (UI) considerations, there are a host of reasons to update a Desktop application to extend its useful life while you explore your options with the Windows Store. But how to keep the application looking and acting up to date in the new modern application world? Fortunately, a host of new, useful features will help to make Windows 7 apps more at home on Windows 8, including touch application programming interfaces (APIs) and new storage features. This article provides some straightforward recommendations for updating your Desktop app for Windows 8, no matter if it is running on an actual desktop, an Ultrabook™ device, or a tablet.

    Embrace Touch

    One of the most visible changes in Windows 8 from the application developer’s point of view is the inclusion of touch. Windows Store apps are largely touch driven, and the same can be true of Desktop apps.

    Working in Windows 8 on contemporary hardware is an interesting transition, as a touch-enabled laptop changes the way users work with the machine. Getting to the charms bar has a keyboard shortcut (Windows+C), for example, but very quickly, one becomes used to reaching up and swiping from the right edge of the screen with a quick flip of the thumb.

Desktop applications can embrace touch, as well. The best current example of this is Microsoft Office 2013, which has a Touch/Mouse Mode button at the top of the quick access toolbar. The usual Ribbon in Microsoft Office, shown in Figure 1, is much better suited to a mouse.


Figure 1. The Mouse Ribbon in Microsoft Office 2013

    With a change to Touch mode, the space between icons—and even the style of icons—changes dramatically. You can see that change in Figure 2.


    Figure 2. The Touch Ribbon in Microsoft Office 2013

The difference is in the size guidelines. Microsoft recommends that to support touch interaction, touch points should be at least 50 pixels wide and high, with a 10-pixel gutter between targets. This guideline supports the majority of users in the majority of situations. So handling touch in an application is really a matter of managing the size of your targets, at least in Touch mode, like Microsoft Office has done.
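
    As a rough sketch of what this means in code (the 50-pixel and 10-pixel figures are taken from the guideline above and treated here as 96-DPI baseline values, which is an assumption; the structure and names are hypothetical), a Desktop app might switch its toolbar metrics between mouse and touch modes like this:

    // Hypothetical sketch: choosing button size and gutter for mouse vs. touch mode,
    // scaled to the monitor DPI. Not taken from Microsoft Office.
    #include <windows.h>

    struct ToolbarMetrics { int buttonSize; int gutter; };

    ToolbarMetrics MetricsFor(HWND hwnd, bool touchMode)
    {
        HDC dc = GetDC(hwnd);
        int dpi = GetDeviceCaps(dc, LOGPIXELSX);   // pixels per logical inch
        ReleaseDC(hwnd, dc);

        ToolbarMetrics m;
        m.buttonSize = MulDiv(touchMode ? 50 : 24, dpi, 96);   // >= 50 px targets for touch
        m.gutter     = MulDiv(touchMode ? 10 : 2,  dpi, 96);   // 10 px gutter for touch
        return m;
    }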

    Consider the Microsoft Design Style

    Microsoft has developed a unique design style for Windows Store apps. In fact, the change in user interaction with Windows Store apps is arguably the largest single change in the new model. Applications that don’t use the style guidelines are rejected from the Windows Store.

A Desktop app isn’t expected to follow the Microsoft design style. Even if it is going into the Windows Store for marketing, there is no requirement that you use the new design style. In fact, some of the design guidelines are pretty difficult to implement in a Desktop app.

That said, some tenets of the Microsoft design style make sense for a Desktop app. Revisiting the app layout and information architecture is worth considering. Reducing chrome and unnecessary UI elements is another good idea.

    Clear Up Your Information Hierarchy
    Using typography to clearly show users the information hierarchy is a good idea no matter whether you are working in a Windows Store app, a Desktop app, or a website. Microsoft design style defines a typographic ramp to clearly define for the user what the most important thing on any given page is. The typographic ramp is shown in Figure 3.


Figure 3. The typographic ramp

    While doing this, consider your ontology. Information architecture is key in application design, and using the typographic ramp makes you consider on what tier each piece of information in your application sits.

    Reduce Chrome
    Increasing space between buttons often means reducing the number of buttons used in the interface. To really embrace touch, developers need to reduce the number of options in front of the user at any one time.

    Desktop applications have too many buttons, anyway. Look at GIMP, in Figure 4.


Figure 4. GIMP and its buttons

Alternatively, look at Trimble SketchUp*, a Windows Store app, shown in Figure 5. Although it is unlikely that a desktop app could go to that level, streamlining the UI is a good step toward a more Windows design style experience.


Figure 5. Trimble SketchUp*

    Open Up Your Layout
As discussed, making more space between your elements can make the whole app look better, take advantage of higher-resolution screens, and make touch-enabled screens a benefit to your users. The application cannot depend on touch because the mouse and keyboard are still in use and will be for a while. However, if you take touch into consideration while slimming down your information architecture and reducing unnecessary UI elements, you’ll find that your interface has a lot more space to play with, and your users will be happier.

    Head to the Clouds

    Modern applications are connected. They use information from a variety of sources and are available anywhere, anytime, on any device. Cloud computing is information provided as a utility. That’s different from a hosted web application. Using service-based information access is the core of the modern application.

    Modern Applications
    A modern application is one that uses cloud services to maintain the data and state of an application across devices. The promise is that no matter whether users are using a PC, a laptop, a tablet, or a phone, all of the data for the application remains in sync.

    The modern application name was coined for tablet applications (especially Windows 8 Store apps), but the same term can apply to an existing desktop app. Storage can be maintained in a central system, and business logic can be moved to an online middleware system.

    Use Windows Azure
    Windows Azure* storage is a great place to keep your stuff. You have three options for keeping data in Azure. SQL Database is exactly what it sounds like: a Microsoft SQL Server* database in the sky. Azure Tables are a NoSQL-style solution to storage. Just make a put call with data, and it’s there. Blob storage is the simplest of the three: It just stores big blocks of stuff and doesn’t care what it is.

    SQL Database provides the kind of enterprise data management that corporate developers are used to. All of the transaction processing, schema management, and pinpoint control come with the massive scaling that accompanies the use of a cloud-hosted solution. In addition, SQL Federation provides easy movement of data between corporate SQL Server installations and the cloud-based SQL Database.

    Azure Tables are unstructured blocks of data broken into three types: accounts, tables, and entities. The account represents an application. The table represents a logical grouping of data. An entity represents . . . well, anything. The application determines what goes in an entity. Figure 6 shows an example.


Figure 6. Azure Tables architecture

    Tables can be a fantastic simple storage system for an application that just needs to keep some data in the cloud.

    Move Business Logic to Services
    Another option toward making a desktop application more modern is simply moving business logic that can’t be easily built in a cross-platform way with JavaScript* to the cloud. Windows Azure is helpful for that, but there isn’t much of a platform for templating the services. Developers are a bit more on their own, but the benefit is that the logic doesn’t have to be written over and over for other platforms.

    Manage the Life Cycle

    Windows 8 allows for a new level of interaction between low-level operating system operations and desktop applications. Windows Store apps are totally managed by the operating system, which is not appropriate for a Desktop app. However, some level of interaction between the operating system and the application gives the user an experience of having things “taken care of,” and that is what we’re shooting for.

    Access Logging
    Knowing when a user logs in can be a real boon for security and user experience alike, especially when you just want to let the user know that you are paying attention. That information is available to Desktop applications through User Access Logging in Windows 8.

Support for logging user access to a server is provided by ual.h in the Windows software development kit (SDK). The application needs to store information about the IP address that is accessing the application in a data blob and then pass it to the UalInstrument function, as the following code snippet shows:

    // Requires ual.h and ws2tcpip.h from the Windows SDK. RoleIdentifier and TenantIdentifier
    // are GUIDs defined elsewhere by the application; ip_number is the client address string.
    UAL_DATA_BLOB ualDataBlob;
    ZeroMemory(&ualDataBlob, sizeof(UAL_DATA_BLOB));
    ualDataBlob.Size = sizeof(UAL_DATA_BLOB);
    ualDataBlob.RoleGuid = RoleIdentifier;
    ualDataBlob.TenantId = TenantIdentifier;

    // Register the product and role before starting a logging session.
    UalRegisterProduct(L"MyProduct", L"UserRole", L"{3D1A8E20-AD01-457B-B044-79113F30C54C}");

    if (S_OK == UalStart(&ualDataBlob))
    {
        // Record the IPv4 address of the client accessing the application.
        ualDataBlob.Address.ss_family = AF_INET;
        InetPton(AF_INET, ip_number, &(reinterpret_cast<SOCKADDR_IN *>(&ualDataBlob.Address)->sin_addr));
        UalInstrument(&ualDataBlob);
    }
    

    The application can then use Windows Management Instrumentation to get information from the event log about user access at the server level and use it in the UI. For instance, Google Mail informs the user about access from other IP addresses right on the main mail page. This kind of information is good for both reference and security.

    Machine State Management
    To a user, the machine appears to be on or off. Really, though, machines can be in any number of other states, and that number grew a lot in Windows 8. A Desktop application can use that information to its advantage when deciding how to manage its own state by checking SYSTEM_POWER_STATE. Some of the states a machine can be in are shown in Table 1.

Table 1. Machine states available in Windows 8

    Machine State | User view      | Power usage
    S0            | Working        | Fully on, everything working
    S1            | Sleep          | Appears off but quickly resumable
    S2            |                | Appears off and monitor off
    S3            |                | Appears off and hard drive stilled
    S4            | Hibernation    | Memory saved to hard drive
    S5            | Soft off       | Only resumable by LAN or power
    G3            | Mechanical off | Power off to all components

    In Windows 8, hybrid shutdown (S4) stops user sessions, but the contents of kernel sessions are written to hard disk. This enables faster startup. You can use that fact to your advantage when checking power state on launch in Microsoft .NET and C++, although in JavaScript, that information isn’t yet available.
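
    A Desktop app cannot query the S-states directly, but it can be notified when the machine enters and leaves them. The following is a minimal sketch of handling suspend/resume notifications in a Win32 window procedure; the SaveApplicationState and RestoreApplicationState helpers are hypothetical placeholders for the app's own logic:

    // Sketch: reacting to sleep/resume transitions in a Win32 desktop application.
    #include <windows.h>

    void SaveApplicationState();      // hypothetical: flush unsaved work, pause timers
    void RestoreApplicationState();   // hypothetical: refresh data, resume timers

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_POWERBROADCAST:
            switch (wParam)
            {
            case PBT_APMSUSPEND:            // the machine is about to enter a sleep state
                SaveApplicationState();
                break;
            case PBT_APMRESUMEAUTOMATIC:    // the machine has resumed
                RestoreApplicationState();
                break;
            }
            return TRUE;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }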

    Got Games?

    In the world of gaming, most of the improvements available in the Windows Store for Windows 8 are also available to Desktop apps. For example, Microsoft XAudio2 is a new set of sound APIs available as part of the Windows 8 SDK.

    Games that are using Microsoft DirectSound* on the Windows Desktop platform should certainly look at XAudio2. XAudio2 now supports per-voice filtering and digital signal processing effects as well as submixing with voices. New audio formats are supported, and runtime decompression of compressed audio is finally available. This brings the Windows game sound up to par with the awesome video control in Microsoft DirectX*. Also, the XAudio2 APIs will not block the audio processing engine—a critical feature for the development of crash-free desktop games.
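
    As a minimal sketch (error handling trimmed; voice creation for individual sounds is left out), bringing up XAudio2 from the Windows 8 SDK looks roughly like this:

    // Sketch: minimal XAudio2 start-up from the Windows 8 SDK.
    #include <xaudio2.h>
    #pragma comment(lib, "xaudio2.lib")

    bool StartAudio(IXAudio2** outEngine, IXAudio2MasteringVoice** outMaster)
    {
        IXAudio2* engine = nullptr;
        if (FAILED(XAudio2Create(&engine, 0, XAUDIO2_DEFAULT_PROCESSOR)))
            return false;

        IXAudio2MasteringVoice* master = nullptr;
        if (FAILED(engine->CreateMasteringVoice(&master)))
        {
            engine->Release();
            return false;
        }

        *outEngine = engine;   // source voices for individual sounds are created from here
        *outMaster = master;
        return true;
    }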

    Microsoft XInput lets an application take input from the Xbox* controller from Windows. Using an Xbox controller in a Windows game takes user interaction to a whole new level. XInput has been enabling Xbox controllers for Windows since Windows XP and has a new 1.4 version for Windows 8. The new version supports upcoming features of the controller—force feedback, wireless, voice, plug-in devices, and navigation buttons. You can find XInput 1.4 in the Windows 8 SDK.
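
    Polling the controller with XInput takes only a few lines; a minimal sketch (controller index 0, with results fed into whatever input layer the game uses) might look like this:

    // Sketch: polling the first Xbox controller with XInput from the Windows 8 SDK.
    #include <windows.h>
    #include <xinput.h>
    #pragma comment(lib, "xinput.lib")

    void PollController()
    {
        XINPUT_STATE state = {};
        if (XInputGetState(0, &state) == ERROR_SUCCESS)   // controller 0 is connected
        {
            bool aPressed    = (state.Gamepad.wButtons & XINPUT_GAMEPAD_A) != 0;
            float leftStickX = state.Gamepad.sThumbLX / 32768.0f;   // roughly -1.0 .. 1.0
            // ... feed aPressed and leftStickX into the game's input layer ...
            (void)aPressed; (void)leftStickX;
        }
    }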

    Use Sensors

    Although sensors are more a mobile development feature, desktop apps run on Ultrabook and tablet devices, too. Desktop-style apps likely can be improved with some integration with the sensor array in an Ultrabook device, laptop, or tablet.

    Windows uses physical sensors to provide some logical sensor objects for application use. These logical objects include:

    • Light sensor. How much light is in the user’s environment
    • Global positioning system. Uses the Global Positioning Satellite network
    • Accelerometer. Determines changes in the movement of the device
    • Compass. Magnetic direction changes
    • Orientation. General sensor for figuring out how the user is viewing the device

    Of course, not all sensors are available on all devices, so take care. That said, integration with the machine at the sensor level gives the user a much more modern experience.
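
    For a taste of what using these logical sensor objects looks like from a Desktop app, here is a hedged sketch that reads the ambient light sensor through the Windows Sensor API (COM must already be initialized, error handling is trimmed, and the sensor may simply not exist on a given machine):

    // Sketch: reading the ambient light sensor with the Windows Sensor API.
    #include <windows.h>
    #include <sensorsapi.h>
    #include <sensors.h>
    #pragma comment(lib, "sensorsapi.lib")
    #pragma comment(lib, "ole32.lib")

    float ReadLuxOrDefault(float fallback)
    {
        ISensorManager* manager = nullptr;
        if (FAILED(CoCreateInstance(CLSID_SensorManager, nullptr, CLSCTX_INPROC_SERVER,
                                    IID_PPV_ARGS(&manager))))
            return fallback;

        float lux = fallback;
        ISensorCollection* sensors = nullptr;
        if (SUCCEEDED(manager->GetSensorsByType(SENSOR_TYPE_AMBIENT_LIGHT, &sensors)))
        {
            ISensor* sensor = nullptr;
            ISensorDataReport* report = nullptr;
            PROPVARIANT pv;
            PropVariantInit(&pv);
            if (SUCCEEDED(sensors->GetAt(0, &sensor)) &&
                SUCCEEDED(sensor->GetData(&report)) &&
                SUCCEEDED(report->GetSensorValue(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &pv)))
            {
                lux = pv.fltVal;                       // ambient light level in lux
            }
            PropVariantClear(&pv);
            if (report) report->Release();
            if (sensor) sensor->Release();
            sensors->Release();
        }
        manager->Release();
        return lux;
    }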

    Do Data Differently

    Data access is a moving target in modern applications. Windows Store apps can’t directly connect to most databases and require use of a service layer for most storage options. Big data and huge-scale reporting are changing the face of both storage and access. The Windows API is changing a bit to support this movement in access options.

    Compression API
    Moving data into and out of applications is time-consuming and expensive. Use of compression can mitigate these problems, and the Compression API is the first low-level toolset for compression in Windows. It supports:

    • MSZIP
    • XPRESS
    • XPRESS_HUFF
    • LZMS
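
    A minimal sketch of compressing a buffer with this API (using the XPRESS algorithm; buffer management and error reporting are left to the caller) looks roughly like this:

    // Sketch: block-mode compression with the Windows 8 Compression API.
    #include <windows.h>
    #include <compressapi.h>
    #pragma comment(lib, "cabinet.lib")

    bool CompressBuffer(const BYTE* data, SIZE_T size,
                        BYTE* out, SIZE_T outCapacity, SIZE_T* written)
    {
        COMPRESSOR_HANDLE compressor = nullptr;
        if (!CreateCompressor(COMPRESS_ALGORITHM_XPRESS, nullptr, &compressor))
            return false;

        BOOL ok = Compress(compressor, data, size, out, outCapacity, written);
        CloseCompressor(compressor);
        return ok != FALSE;
    }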

    Storage Management API
    In previous versions of Windows, the Virtual Disk Service supported a subset of storage options. In Windows 8, the Storage Management API provides an interface to disks and disk-like units that implement a storage management provider. Windows PowerShell* is really the target for the Storage Management API, but if an application is file-centric and focused on the enterprise, there will still be a use for the interface.

    Conclusion

    Existing Desktop apps do have a place in the new Windows 8 ecosystem. With a little updating for touch interaction, cloud storage, and new user interaction APIs, existing apps are ready for listing in the Windows Store. Get the Windows App Certification Kit today, run your existing Desktop app through it, and see what has to change. Then, implement the recommendations above to be ready for Windows 8!

    References

    About the Author

    In 1992, Bill Sempf was working as a systems administrator for The Ohio State University and formalized his career-long association with internetworking. While working for one of the first ISPs in Columbus, Ohio, in 1995, he built the second major web-based shopping center, Americash Mall, using Adobe ColdFusion* and Oracle. Bill’s focus started to turn to security around the turn of the century. Internet-driven viruses were becoming the norm by this time, and applications were susceptible to attack like never before. In 2003, Bill wrote the security and deployment chapters of the often-referenced Professional ASP.NET Web Services for Wrox and began his career in pen testing and threat modeling with a web services analysis for the State of Ohio. Currently, Bill is working as a security-minded software architect specializing in the Microsoft space. He has recently designed a global architecture for a telecommunications web portal, modeled threats for a global travel provider, and provided identity policy and governance for the State of Ohio. In addition, he is actively publishing, with the latest being Windows 8 Application Development with HTML5 for Dummies.

     Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the US and/or other countries.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • Windows* 8
  • touch
  • Windows store
  • Apps
  • User Interface
  • UI
  • desktop
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Touch Interfaces
  • User Experience and Design
  • URL

  • Intel® Trace Analyzer and Collector Guides


    This is currently a placeholder for Intel® Trace Analyzer and Collector usage guides.  Until articles are added, please visit the Intel® Trace Analyzer and Collector product page.  You can also view the documentation.

  • Developers
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Server
  • Windows*
  • Beginner
  • Intel® Trace Analyzer and Collector
  • Message Passing Interface
  • Cluster Computing
  • Development Tools
  • Optimization
  • Parallel Computing
  • URL
  • Improving performance
  • Learning Lab
  • Calculating estimated call counts with Intel® VTune™ Amplifier XE 2013


When you profile your software with VTune™ Amplifier XE, you often start by looking at the top function hotspots list. This allows you to see which functions are consuming CPU resources, so you can focus your optimization efforts.

    Function call counts can provide some additional information to assist in further optimization.

    A hotspot function’s CPU time is a measure of overall time spent there during a collection. There may be multiple calls to a function, some of longer duration and some shorter. If you know call counts along with CPU time/clock ticks, you can then calculate the clock ticks spent in a function during each call. Depending on the call counts you may choose different optimization techniques:

    • If you’re thinking about introducing parallelism, you can do it inside a heavy function. 
    • If time-per-call is small, it may make sense to move your parallel construction to a higher level in the function call stack. 
• Also don’t forget about inlining - it makes sense for functions with a significant call count and small time-per-call, because the function invocation overhead may be significant.
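
    For example (with purely hypothetical numbers): if a hotspot accumulates 1,200,000,000 clockticks during the collection and its estimated call count is 600,000, each call costs roughly 1,200,000,000 / 600,000 = 2,000 clockticks. Such a small per-call cost points toward inlining the function or moving parallelism higher up the call stack rather than parallelizing inside it.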

VTune Amplifier XE 2013 can provide call count information. This metric is available for Hardware Event-based Sampling analysis types, such as Lightweight Hotspots.

The “Collect stacks” and “Estimate call counts” options are required to enable collecting call counts:

    This also can be done from the command line:

    $ amplxe-cl -collect lightweight-hotspots -knob enable-stack-collection=true -knob enable-call-counts=true -- <target_application>

    With these options you’ll be able to see the estimated call counts. See how it looks in this Bottom-up view: 

    In the Top-down view you can see total and self call counts:

    If you switch to the “Hardware Event Counts” viewpoint, you can easily calculate events per call, e.g. clock ticks per single function run:


    Things to remember about the “estimated call counts” feature

1. Call counts are estimated – this means they are statistically calculated, not exact values. A zero value just means that the function was called relatively few times; the real number might still be hundreds or even thousands of calls.
2. The call count column often appears in the right part of the grid, so it is not shown initially – scroll right to find it, and move it to the left if needed (as was done in the screenshots above).
3. Call count collection introduces an additional overhead of 20% or more, though this is much lower than it would be if exact call counts were collected with binary instrumentation.
4. Collecting call count info significantly increases profile data volume. This leads to an increased size of analysis results and significantly increased RAM usage when browsing the results in the GUI.
5. If you experience significant slowdown or too-high memory usage, think about decreasing the analysis data – e.g., increase the “sample after value” for collected events (by creating a custom analysis type).

    A few words about technology

    Collecting estimated call counts is based on BTS (Branch Trace Store) usage. This is hardware functionality in Intel® processors to automatically store information into a memory buffer about all taken branches. Function calls are considered as branches and are taken from this buffer.

    If a function is hot, it is statistically visible. So it is interrupted by a performance monitoring interrupt (PMI) which occurs once a hardware event counter overflows. Once interrupted, the collection of branches is initiated, and when the memory buffer containing branch records overflows, the information is saved on disk (into a trace file) upon reception of a branch tracing interrupt (BTI). Then collection waits for the next sample and gathers another “branch bunch”, and so on.

After collection is finished, trace files are analyzed and call counts are separated from other branching info. Taking into account the call counts in the trace files, the frequency of samples, and the total number of branches in a program, VTune Amplifier XE estimates statistical call counts. Rarely called functions appear in only a few samples or don’t appear at all – estimating call counts from such data would be too far from reality, so call counts for them are shown as zeros.

    Conclusion

The estimated call count feature of VTune Amplifier XE allows you to detect frequently called functions, so you can make informed decisions regarding inlining, introducing parallel constructions, and data decomposition. The statistical collection technology adds lower overhead compared to exact call count collection methods. But the overhead may still be significant, especially in terms of memory usage when results are explored. Be sure to take that into account.

  • "VTune Amplifier XE"
  • Developers
  • Intel AppUp® Developers
  • Partners
  • Professors
  • Students
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Unix*
  • Windows*
  • .NET*
  • C#
  • C/C++
  • Fortran
  • Java*
  • Advanced
  • Beginner
  • Intermediate
  • Intel® Parallel Studio XE
  • Intel® VTune™ Amplifier XE
  • Optimization
  • Parallel Computing
  • URL
  • Improving performance
  • Multithread development
  • Cars, Boats and Planes: Optimizing Sonic & All-Stars Racing Transformed for Ultrabook™ PCs with Touch and Sensors


    by Brad Hill and Leigh Davies.

    Download Article


    Download Cars, Boats and Planes: Optimizing Sonic & All-Stars Racing Transformed for Ultrabook™ PCs with Touch and Sensors [PDF 1.6MB]

    Abstract


    Sonic & All-Stars Racing Transformed is a game from Sumo Digital* published by Sega* on the PC and several other gaming platforms including Xbox* 360, PS3*, PS Vita* and Wii* U. Optimizing the PC version of the game proved a sizable task for Sumo Digital that yielded additional benefits for other target platforms. Intel® engineers worked with Sumo Digital to ensure the PC version runs on par with the other platforms and is optimized to take full advantage of PC technology. This case study describes the techniques used to identify and overcome some of the obstacles encountered in adding sensor control, touch support, and Intel® 3rd Generation Core™ optimization. GPUView and Intel® Graphics Performance Analyzers (Intel® GPA) were used to identify GPU stall periods and track down the causes. Eliminating these nearly doubled the average frame rate from 13FPS to over 25FPS on our test PC.


Figure 1. Screenshot of initial gameplay displaying frame rate of 13 FPS in upper-left corner

    Overview

    Sonic & All-Stars Racing Transformed is a fast-paced cross-platform multiplayer racing game. This case study illustrates how the issues in performance and creating a balanced control experience were identified, addressed, and resolved. Similar approaches can be useful in your own game development.

    To maximize the reach of this game, Sumo Digital needed to port it to PC and touch-enabled and sensor-enabled devices including tablets and Ultrabooks. Some touch and tilt support was available from another game, but time constraints had limited previous development. This project allowed us to build on that foundation.

    In developing a PC title, it’s most efficient to support the widest range of platforms and environments possible. Ideally, the game should run on all versions of the Windows* OS that are still used by a considerable number of users - everything from XP* up to and including Windows 8. This was an important factor in the choice of touch API, as the Windows 7 version was more widely supported than that of Windows 8. Since the Windows 7 API is ‘lower level’ with access to the raw touch data, this allowed existing touch functionality from the console versions to be repurposed.

The goal of setting a high bar for a PC product targeting processor graphics gave us a new opportunity: to go beyond simply tweaking it for generic PC gaming and instead fully optimize it for Intel 3rd Generation Core. The main tools we used for performance analysis were GPUView, a tool to monitor GPU and CPU activity, and Intel Graphics Performance Analyzers (Intel GPA). These utilities proved invaluable in identifying and addressing GPU stalls which were causing drops in frame rate.

    Touch Controls


None of the console versions were purely controlled by touch, so the design starting point was a UI that treated touch as an ‘additive’ control system. The need for a touch-only front end led to the addition of back buttons, large touch zones, and the rework of many screens to enlarge buttons for touch. A more advanced in-game control settings screen was added, as seen in Figure 2.


Figure 2. Custom controls for touch

    With the removal of a requirement to use a gamepad or keyboard, racing controls are primarily driven by a virtual joystick or tilting the device, augmented by buttons displayed on the touch screen as seen in Figures 2 and 3.


Figure 3. In-game controls showing virtual joystick and primary action buttons

    To implement touch in a Windows 8 Desktop app with backwards compatibility, there are two event models to choose from: WM_GESTURE and WM_TOUCH. A detailed article on Windows 8 touch input is available in References.

    WM_GESTURE has many gestures already defined, but it is more appropriate for navigation and manipulation than real-time game control for multiple reasons. It uses a time delay to determine whether a touch is a single press or the start of a gesture, such as a pan gesture, and it doesn’t allow for multiple simultaneous gestures to be tracked. Sumo Digital had designed the touch interface to use both hands for independent controls and wanted to repurpose as much of their existing console code as possible.

The more suitable method was WM_TOUCH, which registers the raw touch events themselves. This allows not only finer control but also more robust options, as multiple individual fingers can be tracked, limited only by the touch screen hardware itself. The tradeoff was gaining more control at the cost of a more complex implementation effort.
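
    A minimal sketch of the WM_TOUCH plumbing (names and the per-contact handling are placeholders, not Sumo Digital's actual code) looks like this:

    // Sketch: registering for raw touch input and handling WM_TOUCH in a Win32 app.
    // Requires _WIN32_WINNT >= 0x0601 (Windows 7) for the touch declarations.
    #include <windows.h>
    #include <vector>

    void EnableTouch(HWND hwnd)
    {
        RegisterTouchWindow(hwnd, 0);   // opt in to WM_TOUCH instead of WM_GESTURE
    }

    LRESULT OnTouch(HWND hwnd, WPARAM wParam, LPARAM lParam)
    {
        UINT count = LOWORD(wParam);
        std::vector<TOUCHINPUT> contacts(count);
        if (GetTouchInputInfo((HTOUCHINPUT)lParam, count, contacts.data(), sizeof(TOUCHINPUT)))
        {
            for (const TOUCHINPUT& ti : contacts)
            {
                // Coordinates arrive in hundredths of a pixel, in screen space.
                POINT pt = { ti.x / 100, ti.y / 100 };
                ScreenToClient(hwnd, &pt);
                if (ti.dwFlags & TOUCHEVENTF_DOWN) { /* start tracking contact ti.dwID */ }
                if (ti.dwFlags & TOUCHEVENTF_MOVE) { /* update virtual joystick / swipe */ }
                if (ti.dwFlags & TOUCHEVENTF_UP)   { /* release contact ti.dwID */ }
            }
            CloseTouchInputHandle((HTOUCHINPUT)lParam);
        }
        return 0;
    }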

    Due to a wide range of devices (and hands!), Sumo Digital opted to go with dynamically repositionable controls, tied to the dominant steering hand’s touch contact point. Using dynamically repositionable controls meant the key controls were always in a place suitable for both the hand size and the posture of the person holding the device and could adapt if the player changed grip on the device while playing.

    Using the older Win7 touch API meant limited touch points, but Sumo Digital had already intended to keep the controls simple and intuitive. The number of buttons was reduced by implementing auto-accelerate and using drift to act as a brake when not turning. Clever clustering of key controls allows one touch zone to be used to detect multiple button presses. Sumo Digital also added simple gesture support; the player can begin a swipe gesture on the stunt button to control in game stunts, which removed the need for a second analogue joystick.

    Sensor Integration


    To make use of the Ultrabook PC’s additional input methods, control was expanded beyond simple touch to include inclinometers for steering the vehicles. These sensors measure the tilt of the device on all 3 axes. Since the vehicles in the game transform among land, sea, and air modes, this tilt control is ideal to seamlessly transform 2-dimensional racing into 3 dimensions when the players take to the air.

    Much of the underlying sensor code was created by Intel. The sensor library directly polls the sensor once per frame providing a very fast response time when detecting changes in the device orientation. Details on the sensor library can be found in the "Blackfoot Blade" case study listed in References with many of the lessons learned in that title benefiting Sonic & All-Stars Racing Transformed. An article with sample code that uses the library is also listed in References.

With the technical challenges of adding sensor support mostly solved using the Intel libraries, the actual gameplay issues were expected to be the next area in need of attention. Unfortunately, sensor control raised some interesting problems. First, many users would hold the same device in different ways; some held the device like a wheel, some like a tray. They also steered by turning the device in different axes. The problem wasn’t too hard to solve for the car and boat, as the game only has to worry about steering in one axis, but planes were another story. This required dynamic recalibration of the default sensor position, both while playing and at key points, for example when the vehicle transforms or when the game is paused. Dynamic recalibration also handled ‘fringe’ cases like when the device is passed to a friend so they can have a go, or the player lies back in bed, or pauses the game, puts the device down, then comes back and holds the device in a different way.
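
    The recalibration idea can be illustrated with a small sketch (all names and thresholds are hypothetical; the game's actual implementation is not public): steering is derived from the difference between the current tilt and a reference tilt captured at key moments.

    // Hypothetical sketch of tilt steering with dynamic recalibration.
    struct TiltSteering
    {
        float reference = 0.0f;                  // tilt (radians) treated as "straight ahead"

        void Recalibrate(float currentTilt)      // call on pause, unpause, vehicle transform, ...
        {
            reference = currentTilt;
        }

        float Steering(float currentTilt) const  // returns -1..1 for the game's input layer
        {
            const float maxTilt = 0.5f;          // tilt that maps to full lock (assumed)
            float delta = currentTilt - reference;
            if (delta >  maxTilt) delta =  maxTilt;
            if (delta < -maxTilt) delta = -maxTilt;
            return delta / maxTilt;
        }
    };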

A second problem occurred when comparing the responsiveness of touch and sensor controls to the gamepad experience. With touch and sensor input, minor user errors led to exaggerated negative results. This was remedied by adding a steering assistance mechanic. This is an ‘additive’ system that purely adds a varying amount of input to the player’s own input, but in a way that doesn’t ‘fight’ the player or play the game for them.

    GPU Optimization


    Once the game’s component systems were largely complete, GPUView was used to check the GPU performance. We noticed that there were significant gaps in the GPU hardware queue, where the graphics are processed and rendered, as depicted in Figure 4. Ideally, the GPU would be running all the time unless deliberately limited to conserve power, constantly queuing new frames while the current frame is being rendered to maximize the frame rate.


Figure 4. GPUView shows GPU stalls as gaps among the top green bars

The bars on the top represent GPU activity, with recurring patterns indicating frames. The GPU should be constantly active when running, but here we see gaps of 5-6 milliseconds per frame. It may seem like a small delay, but this constitutes about 20% of the total frame drawing time and makes a significant impact on the game’s frame rate. These gaps coincided with stop-start behavior on the CPU. Thus the CPU and GPU were virtually serialized, causing the delays. Using GPUView to investigate the DirectX events around the stall points, it was found that Lock Allocation events were occurring that forced the CPU to wait for the GPU to complete its work. See Figure 5 for the event details that line up with the red line in Figure 4.


Figure 5. GPUView metric for the Lock Allocation event most likely to be the cause of the stall.

GPUView also allows the developer to show details of the memory that was being locked at this point. This is shown below in Figure 6. Note that the allocation handle is the same, 0xFFFFFFA800B784330. The important thing to notice is that the lock is on a resource with the D3DDDIFMT_R32F texture format that is 1x1 pixels in size.


Figure 6. GPUView metric for the memory being locked.

This was enough information to investigate the likely cause of the lock in GPA. In GPA we could view all the Render Targets and Textures to find anything that was 1x1 and a 32-bit float, and we could also look at the API log to find problematic “LockRect” calls that caused the CPU to wait on the GPU. The Lock calls are shown below in Figure 7.


Figure 7. GPA API log call showing all LockRect calls

The problem was traced to the CPU polling the GPU for data every frame. In this case, the CPU was waiting until the GPU had rendered data into a 1x1 texture that was being used to calculate the average luminosity of the screen for a technique called tone mapping. The GPU would then sit idle while the CPU calculated the data needed for the tone mapping post-processing effect and built up enough information to create a new DMA packet to send to the GPU and restart the hardware queue. Ideally, the GPU should always have data prepared so that the CPU does not have to wait to retrieve the data.

    This problem was fixed by ensuring the CPU worked on data from a previous frame. The GPU resource was first copied to a CPU readable resource using the DirectX function StretchRect. Two frames later, this resource was locked, ensuring the GPU had completed the work before the CPU requested it. The CPU lockable rendering surface would be selected from several spare surfaces in a “round robin” manner, ensuring that the CPU was never asking for data that the GPU had not yet calculated.
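
    A hedged Direct3D 9 sketch of this round-robin readback pattern follows; the names, the latency of three surfaces, and the lockable render-target format are assumptions rather than the game's actual code, and lockable render-target support for a given format varies by driver.

    // Sketch: non-blocking readback of a 1x1 luminance render target (Direct3D 9).
    #include <d3d9.h>
    #pragma comment(lib, "d3d9.lib")

    const int kLatency = 3;                          // read data that is two frames old
    IDirect3DSurface9* g_readback[kLatency] = {};

    void CreateReadbackSurfaces(IDirect3DDevice9* dev)
    {
        for (int i = 0; i < kLatency; ++i)
            dev->CreateRenderTarget(1, 1, D3DFMT_R32F, D3DMULTISAMPLE_NONE, 0,
                                    TRUE /*lockable*/, &g_readback[i], NULL);
    }

    float ReadAverageLuminance(IDirect3DDevice9* dev, IDirect3DSurface9* lumRT, unsigned frame)
    {
        // Queue a GPU-side copy of this frame's 1x1 luminance target.
        dev->StretchRect(lumRT, NULL, g_readback[frame % kLatency], NULL, D3DTEXF_NONE);

        // Lock the surface written two frames ago; the GPU finished it long ago,
        // so LockRect no longer forces the CPU to wait for the GPU.
        IDirect3DSurface9* old = g_readback[(frame + 1) % kLatency];
        D3DLOCKED_RECT lr;
        float lum = 0.0f;
        if (SUCCEEDED(old->LockRect(&lr, NULL, D3DLOCK_READONLY)))
        {
            lum = *reinterpret_cast<float*>(lr.pBits);
            old->UnlockRect();
        }
        return lum;
    }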


Figure 8. Optimized code metrics show smoother performance

    As shown in Figure 8, the result is a much smoother frame workload from having removed the gaps in both the GPU and CPU processing.

The optimization was further enhanced when Sumo Digital streamlined the post processing by combining techniques. The original shadow and lighting calculation system generated and used a stencil buffer in a three-pass system. A new platform-specific version of the code was created using a different set of shader and z-buffer commands that streamlined the processing to a two-pass system without any visual compromise.

    In addition, GPA hardware metrics showed the pixel shaders to be bandwidth-limited in the texture samplers. This code was reworked to allow some of the less complex shaders to pre-calculate the values and store these into unused alpha channels. This allowed use of fewer textures in the post-process shaders, giving a better ratio of math instructions to texture fetch instructions (which introduce latency).

The combination of the improved post processing, the new shadow and lighting system, and the elimination of the GPU stall, together with many other smaller optimizations, resulted in the frame seen in Figure 9. Not only is the frame rate more than doubled, the visual quality has also been improved with higher quality lighting and with additional post processing effects including Ambient Occlusion.


Figure 9. Screenshot of completed gameplay, with frame rate of 29 FPS denoted in the upper-left corner

    Conclusion


This case study demonstrates some solutions for typical obstacles in creating and optimizing touch-based games. The work done on the PC version allowed Sumo Digital to back port many of the control improvements to other versions of the game. The PC, with devices that are larger and heavier than phones or the PS Vita, raised control issues that hadn’t previously been noted. Solving these problems benefitted all devices. The self-calibration of the inclinometer happened in time to ship with the PS Vita version and made a big difference to the control. Making the right decisions in implementation of sensors and touch can solve many problems in performance and user experience. Tools such as Intel GPA are vital to find and capitalize on opportunities for optimization, preventing unnecessary delays and taking full advantage of the hardware.

    About the Authors


    Brad Hill is a Software Engineer at Intel in the Developer Relations Division. Brad investigates new technologies on Intel hardware and shares the best methods with software developers via the Intel Developer Zone and at developer conferences. He is currently pursuing a Master of Science degree in Computer Science at Arizona State University.

    Leigh Davies is a senior application engineer at Intel with over 15 years of programming experience in the PC gaming industry. He is a member of the European Visual Computing Software Enabling Team providing technical support to game developers, areas of expertise include 3D graphics and recently touch and sensors.

    References


    Comparing Touch Coding Techniques - Windows 8 Desktop Touch Sample: http://software.intel.com/en-us/articles/comparing-touch-coding-techniques-windows-8-desktop-touch-sample.

    Implementing Touch and Sensors for Windows* 8 Desktop Games: Confetti Interactive’s* experiences developing "Blackfoot Blade": http://software.intel.com/en-us/articles/implementing-touch-and-sensors-for-windows-8-desktop-games-confetti-interactive-s.

    Accessing Microsoft Windows* 8 Desktop Sensors: http://software.intel.com/en-us/articles/accessing-microsoft-windows-8-desktop-sensors

    Test PC Specifications


Ultrabook™, Intel® Core™ i7-3667U CPU @ 2.00 GHz with HD4000 Graphics, 4 GB memory. Windows 8 Pro 64-bit OS. 5-point touch support.

    Ultrabook™ products are offered in multiple models. Some models may not be available in your market. Consult your Ultrabook™ manufacturer. For more information and details, visit http://www.intel.com/ultrabook

    *Other names and brands may be claimed as the property of others.

    Copyright© 2013 Intel Corporation. All rights reserved.

    Performance Notice


    For more complete information about performance and benchmark results, visit www.intel.com/benchmarks

  • ultrabook
  • Windows 8*
  • touch
  • sensor
  • GPUView
  • Intel® Graphics Performance Analyzers
  • Intel® GPA
  • Sensor Integration
  • GPU Optimization
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Intel® Graphics Performance Analyzers
  • Game Development
  • Optimization
  • Sensors
  • Touch Interfaces
  • URL
  • Connecting Software Partners and Hardware Resellers Worldwide: Intel Launches Promotion of Ultrabook™ Enabled Apps


    The arrival of the Intel® Ultrabook™ has opened up exciting opportunities for developers of applications optimized for the touch and sensor capabilities of Windows 8* Ultrabook devices. Intel is promoting additional business development opportunities for Intel® Software Partners by connecting enabled apps for Ultrabook™ to its global hardware reseller community (Intel® Technology Providers).      

Intel is hosting the first of a series of online training modules covering Ultrabook-enabled apps, available to Intel® Technology Providers in more than 16 languages worldwide. In the module, entitled “Inspire Your Customers with Enabled Apps for Ultrabook™ and Windows* 8”, the reseller community is exposed to nine enabled apps and the Ultrabook Device App Showcase, which offers a collection of apps to help inspire their customers. The training will help resellers fulfill credit required for maintaining their membership in the Intel Technology Provider Program. Resellers are also being encouraged to engage with Software Partners directly to explore bundling opportunities for enabled software.

    Ultrabook Apps Being Promoted

    In addition to the apps available in the Ultrabook Device App Showcase, Intel is actively promoting numerous apps related to productivity, creativity and gaming in the reseller training module, including:

    Apps for Productivity 
    Team Viewer* – Online remote desktop control and meeting software
    KeyLemon* – Face and speaker recognition for a convenient, secure log-in and protection
    Ashampoo Snap 6* – Capture screenshot images and video with audio 

    Apps for Creativity
    CyberLink PowerDirector* – PC Magazine’s Choice: Consumer video editing
Adobe Photoshop CS6* – Imaging magic used by photographers and designers worldwide
    Sony Vegas Pro* – Create professional videos and burn them to DVD/Blu-Ray Discs* 

    Apps for Gaming
    Blackfoot Blade* – Top-down helicopter combat action
    iRacing.com* – Premier online racing simulation game
    Sid Meier’s Civilization V* – The award winning, critically acclaimed turn-based strategy game

    Intel is Dedicated to Building New Opportunities for Software Partners

    Broaden your reach with worldwide opportunities through the new Intel® Ultrabook™. Enable your app today.  For more information: http://software.intel.com/en-us/ultrabook

  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Sensors
  • Touch Interfaces
  • URL
  • Creating a First-Class Touch Interface for Defense Grid: The Awakening


    Abstract

    Defense Grid: The Awakening is a tower-defense style strategy game on PC and other gaming platforms. Hidden Path Entertainment*, the developer of Defense Grid, worked with Intel® Corporation to add a full touch-based interface to the game that ran well on Ultrabook™ PCs and Windows*-capable tablets. This article details the approach Hidden Path took to implement touch, and calls out relevant challenges for those seeking to add a touch-based interface to their games.

    Defense Grid and Ultrabook

    In early 2012, Defense Grid was already a successful game, and was available in retail stores, on Steam*, and on the Xbox* 360. The addictive, puzzle-like tower defense gameplay delivered a lot of fun with a relatively easy-to-play interface. After establishing that they had a proven seller, Hidden Path looked for ways to expand the reach of the game by porting it to additional gaming platforms. The timing of this coincided with a retail wave of highly capable Windows tablets and touch-screen Ultrabook PCs. Intel approached Hidden Path to offer help in updating the PC version of the game to add an intuitive touch interface. The result of this collaboration is a game that fully embraces new PC form factors without compromising the original, highly-effective mouse and keyboard interface.

    Figure 1: Game in Mouse and Keyboard mode

    Figure 2: Game in Touch mode – note additional icons for touch control

If you look at the differences in Figure 2 vs. Figure 1, you’ll see that the Touch UI adds tower selection buttons on the right edge of the screen, a Fast Forward button at the bottom left, and a Menu/Settings button at the top. Keeping the buttons close to the edges was a key design decision, providing easy access to the controls when holding a touch-based device (like a tablet or an Ultrabook).

    Figure 3: Tower Placement in Keyboard and Mouse mode

    Figure 4: Tower Placement in Touch mode

    Figures 3 and 4 show a tower placement action. Tower placement in keyboard / mouse mode involves selecting a tile to build on, and then choosing the appropriate tower from the context menu. In touch mode, to avoid going through list boxes, which are hard to access, you start by selecting the correct tower icon, before touching the tile to place it on.

    Making Touchable Interface Elements

    When adding touch to the game, we began by examining the visual design of the layout, seeking to make key UI elements easier to touch, without requiring too much precision. The key UI elements for Defense Grid are tower selection, tower placement, Menu/Pause, and Fast Forward.

The design moved these elements to the sides of the screen, in anticipation of running on a tablet PC where the user would primarily use their thumbs to activate the UI. A design prototype was made and tested on a variety of touch-capable devices. It worked very well for tablets. When testing the prototype on a touch-capable Ultrabook, some of the advantages of putting the UI elements on the side of the screen were lost.

    Hidden Path had considered other alternative UI approaches, including an option where the gamer would touch a tile to build on, producing a star burst of available tower choices around the touch area. This was intended to help retain the context of the tower placement action spatially, but it obstructed the neighboring tiles and map. In the end, Hidden Path decided this was too intrusive during gameplay.

    Code Changes to Handle Touch Events

    For Windows 8 Desktop apps like Defense Grid, there are two mutually-exclusive event models that can be used: WM_GESTURE and WM_TOUCH. Defense Grid uses WM_TOUCH, which is the lower-level API. WM_TOUCH requires the game to interpret each touch contact and movement without any automatic gesture detection. WM_GESTURE is greedier. It doesn’t release the message until the gesture is completed. This led to unacceptable lag. For example, a swipe across the screen to pan the map did nothing until the swipe is completed, at which time the map pans. Using WM_TOUCH, the game was able to fluidly handle swipe gestures and have the camera panning follow the finger movement during the gesture.

    Implementing touch using WM_TOUCH was relatively simple for the Hidden Path engineers, but some issues arose that required additional time and effort to resolve. One of the early problems was implementing a satisfying pan and scan of the game map. Initially, the game treated swipe starts as taps, but after testing with actual devices, this was tweaked to recognize the swipe early on, providing an immediate response.

    Also, touch-capable devices have different responsiveness and latency in their reaction to touch. The custom-designed gesture interpreter (built on WM_TOUCH) had to distinguish between taps and swipes, and the distinction relies heavily on the latency of the touch events. Engineers at Hidden Path tested their implementation on a variety of devices and tweaked their gesture recognizer code to work well within a range of expected latencies.
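
    The shape of such a recognizer can be sketched as follows; the 12-pixel and 250-millisecond thresholds are purely illustrative, since Hidden Path tuned theirs per device:

    // Hypothetical sketch of a minimal tap-vs-swipe classifier per touch contact.
    #include <windows.h>
    #include <cstdlib>

    struct ContactTracker
    {
        POINT start;               // position at TOUCHEVENTF_DOWN
        DWORD startTime;           // GetTickCount() at TOUCHEVENTF_DOWN
        bool  isSwipe = false;

        void OnMove(POINT p)
        {
            // Promote to a swipe as soon as the finger moves far enough; don't wait
            // for the UP event, so camera panning can start immediately.
            if (!isSwipe && (std::abs(p.x - start.x) > 12 || std::abs(p.y - start.y) > 12))
                isSwipe = true;
        }

        bool IsTapOnUp(DWORD upTime) const
        {
            return !isSwipe && (upTime - startTime) < 250;   // short, stationary contact
        }
    };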

    One last problem came from successfully transitioning between touch input and the keyboard and mouse input. Hidden Path wanted to design the interface so that a player could use either input method and switch between the two. The game was implemented to switch the interface on first detection of a new input mode. For example, if keyboard or mouse was used, the onscreen touch based icons would disappear. They returned on the first touch detection. The tutorials / control help updated dynamically to reflect the current input mode of the user. For example, the game advised you to pinch the screen or scroll the mouse wheel to zoom, based on which mode the game was in.

    Rotation Events

    One of the perks of a tablet or Ultrabook is its ability to function fully in both landscape and portrait mode. Yet, sometimes this amazing feature causes trouble. Some apps and games are designed to work well in only one of the modes. In the case of Defense Grid, you needed to have it in Landscape mode to get a playable experience.  But, Windows 8 allows AutoRotation of the Desktop and all apps that go with it. There wasn’t an obvious way to disable this autorotation if the user chose it.

    This caused the game to crash if the device was rotated when Windows Display AutoRotation was enabled. Even a slight rotation became troublesome. 

    To solve this, we found that there was an unpublished function in Win8 user32.dll we could call to limit orientation to Landscape (or Portrait).

    typedef enum ORIENTATION_PREFERENCE
    {
        ORIENTATION_PREFERENCE_NONE              = 0x0,
        ORIENTATION_PREFERENCE_LANDSCAPE         = 0x1,
        ORIENTATION_PREFERENCE_PORTRAIT          = 0x2,
        ORIENTATION_PREFERENCE_LANDSCAPE_FLIPPED = 0x4,
        ORIENTATION_PREFERENCE_PORTRAIT_FLIPPED  = 0x8
    } ORIENTATION_PREFERENCE;
     
    typedef BOOL (WINAPI *FPtrType)(ORIENTATION_PREFERENCE orientation);
     
    char lpszModuleName[] = "user32.dll";
    HMODULE hModule = LoadLibraryA(lpszModuleName);
    if( hModule != NULL )
    {
        char lpszFunctionName[] = "SetDisplayAutoRotationPreferences";
        FPtrType HookedSDARP = (FPtrType)GetProcAddress(hModule, lpszFunctionName);
        if( HookedSDARP != NULL )
                (*HookedSDARP)( ORIENTATION_PREFERENCE_LANDSCAPE );
    }
    

Since this is an unpublished API call, the best way to use it is to check whether the function is available in the current user32.dll using GetProcAddress(). GetProcAddress() will return a valid pointer on Windows 8 but NULL on Windows 7 and earlier OSes. This works well, as Display AutoRotation is only available on Windows 8.

    Conclusion

Modifying a successful game like Defense Grid so it used a whole new input method was a gutsy move that opened up the game to a new class of devices and enhanced the experience for consumers with touch-capable PCs. Hidden Path designed a new interface for touch without taking anything away from the existing keyboard and mouse interface. The coding went smoothly, apart from a few interesting edge cases that were eventually resolved. The end result is the same great gameplay available to a larger market.

    References

    Hidden Path Entertainment: http://www.hiddenpath.com

    Auto-Rotation fix: http://software.intel.com/en-us/blogs/2013/01/10/handling-windows-8-auto-rotate-feature-in-your-application

    About the Author

    Doraisamy Ganeshkumar is a Senior Software Engineer on the Intel Developer Relations team. He helps PC game developers optimize games for Intel products. His current focus is to ensure the best Out of Box gaming experience for PC gamers.

    Erica McEachern is a Technical Writer, a roller derby athlete, and an avid gamer. She rarely uses semicolons, and prefers compound sentences to simple ones. She also believes that coffee is the key to creativity and collaboration.

  • ultrabook
  • Windows 8*
  • defense style strategy game
  • Hidden Path Entertainment*
  • developer of Defense Grid
  • Developers
  • Windows*
  • Graphics
  • Touch Interfaces
  • User Experience and Design
  • URL
  • How to use Boost* uBLAS with Intel® MKL?


If you are used to uBLAS, you can take advantage of Intel® MKL by substituting Intel MKL calls for Boost uBLAS matrix-matrix multiplication functions in C++.

    uBLAS pertains to the Boost C++ open-source libraries and provides BLAS functionality for dense, packed, and sparse matrices. The library uses an expression template technique for passing expressions as function arguments, which enables evaluating vector and matrix expressions in one pass without temporary matrices. uBLAS provides two modes:

    -   Debug (safe) mode, default.
    Type and conformance checking is performed.

    -   Release (fast) mode.
    Enabled by the NDEBUG preprocessor symbol.

    The documentation for the Boost uBLAS is available at www.boost.org/.

The example in this KB article demonstrates how to overload the prod() function to substitute uBLAS dense matrix-matrix multiplication with Intel MKL gemm calls. Though these functions break uBLAS expression templates and introduce temporary matrices, the performance advantage can be considerable for matrix sizes that are not too small (roughly, over 50).

    You do not need to change your source code to use the functions. To call them:

    -   Include the header file mkl_boost_ublas_matrix_prod.hpp in your code (from the attached mkl_and_boost_examples zip file).

    -   Add appropriate Intel MKL libraries to the link line.

     

    Only the following expressions are substituted:

    prod( m1, m2 )

    prod( trans(m1), m2 )

    prod( trans(conj(m1)), m2 )

    prod( conj(trans(m1)), m2 )

    prod( m1, trans(m2) )

    prod( trans(m1), trans(m2) )

    prod( trans(conj(m1)), trans(m2) )

    prod( conj(trans(m1)), trans(m2) )

    prod( m1, trans(conj(m2)) )

    prod( trans(m1), trans(conj(m2)) )

    prod( trans(conj(m1)), trans(conj(m2)) )

    prod( conj(trans(m1)), trans(conj(m2)) )

    prod( m1, conj(trans(m2)) )

    prod( trans(m1), conj(trans(m2)) )

    prod( trans(conj(m1)), conj(trans(m2)) )

    prod( conj(trans(m1)), conj(trans(m2)) )

These expressions are substituted in the release mode only (with the NDEBUG preprocessor symbol defined). Supported uBLAS versions are Boost 1.34.1, 1.35.0, 1.36.0, and 1.37.0.
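
    Assuming the header from the example package is on the include path, a minimal usage sketch (matrix initialization omitted) might look like this:

    // Sketch: uBLAS prod() dispatched to Intel MKL gemm. Build with NDEBUG defined
    // and the appropriate Intel MKL libraries on the link line.
    #include <cstddef>
    #include <boost/numeric/ublas/matrix.hpp>
    #include "mkl_boost_ublas_matrix_prod.hpp"   // overloads prod() with MKL gemm calls

    int main()
    {
        namespace ublas = boost::numeric::ublas;
        const std::size_t n = 512;
        ublas::matrix<double> a(n, n), b(n, n), c(n, n);
        // ... fill a and b ...
        c = ublas::prod(a, b);                   // substituted with an Intel MKL dgemm call
        return 0;
    }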

A code example provided in the attached zip file (ublas/source/sylvester.cpp) illustrates usage of the Intel MKL uBLAS header file for solving a special case of the Sylvester equation.

    To run the Intel MKL ublas examples, specify the BOOST_ROOT parameter in the make command, for instance, when using Boost version 1.37.0:

    make lib32 BOOST_ROOT=<your_path>/boost_1_37_0

     

  • mkl with boost
  • mkl ublas
  • mkl and boost/ublas
  • Linux*
  • OS X*
  • Windows*
  • C/C++
  • Intel® Math Kernel Library
  • URL
  • Pointing the Way: Designing a Stylus-driven Device in a Mobile World


    Download Article

    Pointing the Way: Designing a Stylus-driven Device in a Mobile World [ PDF 676KB ]

    By Benjamin A. Lieberman, Ph.D.

    As infants, we know instinctively to reach out with our bodies to explore the new and exciting world around us. Our first fumbling attempts allow us to learn the advantages and limitations of our “built-in” pointing tools—our hands and fingers. Soon, however, we learn that to have a more effective hold on our world, we need other tools that provide greater precision than our fingers, such as pens for drawing and writing. A similar discovery (or perhaps a rediscovery) is now occurring in the mobile computing space with the introduction of touch-responsive screens and stylus-based control.

Although touch screens have been in use for decades, only with the recent explosion of hand-held mobile devices has touch truly gone global. However, just as we did when we were children, we have discovered that the finger makes for a poor precision instrument. And so we see the reintroduction of an old friend: the pen. In a mobile computing environment, though, our tools must adapt to our new needs for digital input, and so the pen reverts to an earlier form—the stylus.

    Intel® processor-based Ultrabook™ devices with a stylus are available on the market today, and software applications that use the stylus in appropriately creative ways will have a competitive advantage. A recent worldwide study by Dr. Daria Loi of Intel provides key insights into consumer behaviors with these technologies and user preferences for active stylus and effective stylus-centric app design.

    Human Interaction with Our Environment

As most three-year-olds know, our fingers are useful for artistic expression. A nice, clean page and some finger-paints allowed an infinite variety of Jackson Pollock–esque creations. The level of control, immediacy, and tactile feedback our fingers provide led to rapid understanding of the correct amount of pressure to apply to the canvas, the different viscosity of the paints, and even how using different paints on individual digits made a really interesting series of parallel lines. Seemingly, the only drawback was that every line was exactly finger width, which made fine edges and shading a bit difficult.

    As we got older, we began to refine tool use into specialized functions, one of which—the pencil—allowed us to learn written language. The pencil was simple to use: Place the pointed end on a piece of paper and make a mark. We all rapidly adapted ourselves to the single level of indirection that a writing instrument introduced. We didn’t make the mark with our finger, but instead used the much more accurate pencil. Artistic expression also gained much by this approach.

    And so over time we came full circle, turning these tools into commonly encountered items that require little or no thought to use effectively. If we need to open a door, we use fingers. If we need to drop a note to the spouse, we use a pen. No additional training required.

    Feeling Our Way Through a Mobile Computing World
    Then we invented computing devices. Although there is no question that computers have revolutionized the way humans produce and consume information, for many years, these devices were expensive, complex, and difficult to use effectively. In the early days of computing, the only way for a human to communicate with a computer system was via the equivalent of a typewriter wired to an oscilloscope. Needless to say, this interface was not intuitive. Over time, we have begun to move back to our earlier, more familiar world view.

    The mouse is still a popular method of control for computer systems, and software has been highly optimized for mouse use. However, several problems are inherent in a mouse input device. First, you have the problem of mechanical detection of motion, which requires moving the mouse across a smooth surface. Even trackballs require extra desk space to operate. As devices grow smaller and users desire to be unleashed from their desktops, requiring a mouse for control becomes a significant problem for mobile devices. Something more basic is needed.

    With the introduction of the touch-sensitive screen, we are able to take direct control over our tools by using nothing more than a finger. There is no question that for phones and tablets, finger-based input has found wide acceptance. But as we discovered earlier with the finger painting example, it is difficult to have a precise interaction simply using the blunt finger tip. And once again, the answer to the problem is the stylus. However, we have learned some lessons from earlier attempts at reintroducing a stylus—we need to change this simple tool to better fit into a mobile computing environment. Effective stylus design, software integration, and industrial design factors are key to the adoption of the stylus back into mainstream usage. So, what design considerations will we need to take into account?

    Effective Stylus Design for the Mobile Computing Device

    In 2011, Dr. Daria Loi, user experience innovation manager in Intel’s PC Client Group, conducted a study on how users interact with touch-enabled clamshell devices running the Windows* 8 mobile operating system. This research provided quite a few counterintuitive insights, such as the general acceptance of a touch screen deployed to a clamshell laptop computer. Based on this research, engineers at Intel were encouraged to move forward with a general release of touch screens integrated with standard Ultrabook devices, with much success.

    Famously, Steve Jobs of Apple Computer noted that the general public will reject a vertical touch screen because of the effort required to lift your hand and arm forward to the screen. This so-called “gorilla-arm” position was not observed in practice during the study. Instead, users rested their hands on the sides of the screen, with their elbows on the table surface or alongside their bodies. In some cases, the user would even rest one hand on the top of the screen and use the thumb to scroll the screen! So the arguments against touch interactions on a vertical screen do not seem to hold true based on direct observation. As Dr. Loi stated, “They basically told me, ‘Nobody’s obliging me to be on the mouse for 8 hours in a row. Nobody’s obliging me to lift my arm to touch the screen for 8 hours in a row. I am in charge. I do what I want. Here, you give me one extra option.’”

    Early in 2012, Dr. Loi conducted another user study focused on Windows 8 usage on multiple form factors, some of which were equipped with a stylus in addition to the standard touch-enabled screen. She observed that the users had a different approach to controlling the Windows 8 software when a stylus was available. These observations sparked a series of follow-up research questions:

    • How will users respond to the introduction of a stylus into the Ultrabook computing environment?
    • What type of stylus technology would be best (e.g., passive or active)?
    • Which elements of the operating system enhance or detract from the stylus user experience?


     Figure 1: Tablet with an active stylus.

    Given the discussion above, there is clear value in the use of a stylus in a computing environment, but how well will that translate into a laptop situation? What design considerations should be made to accommodate this modality?

    Dr. Loi and team decided to pursue this question in a similar way to the 2011 study—that is, in multiple markets, prompting users to perform tasks with Ultrabook devices, and observing and recording their actions. This design method, based on direct interaction with systems instead of an indirect method such as a questionnaire or interview, greatly contributed to how senior executives responded when research results from the 2011 study on touch were shared. As Dr. Loi notes, “I found myself sitting in meetings with senior executives from different companies and literally seeing ‘aha’ moments on their faces. I would show them the research results, and then I had a five-minute video of users I interviewed telling what they thought about touch on a clamshell device.”

    So after the 2011 touch study and the 2012 Windows 8 study, a new hands-on, international study was organized in three locations (the United States, the United Kingdom, and China), with a focus on stylus use. These locations were selected for specific reasons based on market and cultural differences in each location. For example, in China, the style of writing is different in both form and character construction—more like what would be considered calligraphy in the West. Therefore, they have a different response to using a stylus. “As a user, you really need to be able to use and try a device in practice. As a researcher, you need to be next to the person, observe his or her behavior, and ask questions based on what you observe. It’s really behavior- and observation-driven research, very practical,” says Dr. Loi.

    The research approach was divided into two parts. The first part used a passive stylus, and the second part used the active stylus technology. A passive stylus is one that reacts with a modern touch screen in much the same way as your finger—that is, via capacitance. This stylus form has a blunt tip and uses existing touch screens. By contrast, an active stylus requires that an extra physical responsive layer be added to the screen. This technology provides a stylus that is much more pressure sensitive and has a smaller, harder tip. Each of these stylus forms was seen to have advantages and drawbacks for the different user groups as they were prompted to execute a specific set of computing tasks. Users were provided with a varied set of interaction tools, including touch screens, active and passive styluses, and touch pads, and allowed to explore multiple input mechanisms. Users’ behaviors and choices were carefully recorded, with some surprising results (see Table 1).

    Table 1. Summary of key findings from studies conducted by Dr. Loi

    Finding: When using a stylus, the palm was often held on the screen to provide support.
    Consequence: Palm rejection of extraneous touch events that occur against the screen was an issue.

    Finding: Passive stylus users applied more pressure than active stylus users.
    Consequence: Device tipping can result from extra pressure against the touch screen if a passive stylus is used.

    Finding: Users did not complain about arm lifting to touch the screen.
    Consequence: Occasional arm lifting and reaching to touch the screen is as acceptable as mouse use (which was also noted to cause discomfort).

    Finding: The active stylus was preferred over the passive stylus.
    Consequence: The active, pressure-sensitive stylus was preferred for its accuracy of line and motion. However, some users liked the feel of a soft-tip, passive stylus.

    Finding: Users preferred multiple interaction options.
    Consequence: Stylus, touch pad, touch screen, and mouse were all used interchangeably as the needs of the user dictated; personal preference was a strong motivator.

    Finding: Users liked the ability to take direct control over system behavior.
    Consequence: Users felt more in control of the device when provided with an active stylus and touch-sensitive screen.

    Finding: Users strongly preferred personalization options.
    Consequence: Users enjoyed selecting a stylus that best fit their personal needs (weight, balance, surface finish, pointing tip, etc.).

    Finding: With touch interactions, users showed no hand preference.
    Consequence: As opposed to typical mouse use (which tends to drive selecting one hand), touch users switched interacting hands freely.

    Finding: If a stylus is provided with the device, it must be integrated into the design.
    Consequence: The stylus must be “garaged” in the body of the device.

    Finding: Different cultures respond in unique ways to the introduction of stylus technology.
    Consequence: Cultural differences, such as responding to the sound when an active stylus is used on a screen, will have a dramatic effect on utilization and acceptance.

    One key difference between touch-based interactions and stylus-based interactions was the innate tendency to brace the writing hand against the screen. This is much the same behavior you find when writing on paper: The fine motor skills required to hold and manipulate a pen accurately require the larger arm muscles to be relaxed. So, stylus users attempting to write on a screen had to anchor their arm (elbow on table surface or braced against the body) and their palm to use the stylus effectively. Without the ability to rest the user’s palm against the screen, a natural human tendency is prevented, leading to frustration and dissatisfaction. The consequence for a touch-sensitive screen is the absolute necessity to engineer palm-rejection algorithms into the hardware sensors.

    Another key finding was that acceptance of stylus-based input was driven by personal preference for the form factor. Users tended to be specific on certain physical aspects of the stylus, such as the surface finish, weight, and tip construction. For example, with the passive stylus, out of 15 different models, users tended toward just two types, including the Wacom Bamboo stylus, based on the finish and heft, which matched a high-quality standard pen. A link to a review of the top passive styluses can be found in the section, “For More Information.”

    As Dr. Loi noted, “I was impressed by how specific they were with design recommendations around the stylus. They were very precise in articulating why they liked one versus the other and what they expected to be the ideal stylus. They were talking very specifically about weight balance, proportions, size, finishes, look, and feel. Many people also commented about different tips, to be able to interchange and add different kinds of tips.”

    Along with storage preferences, many subjects strongly recommended integrating the stylus directly into the body of the device. They did not want the stylus to be easy to lose or to have to carry two devices, like they have to do with a mouse. They also wanted a stylus that matched the design characteristics of the associated device. The overall design of the stylus had to be such that it isn’t considered an “afterthought,” but instead integral to the industrial design concept of the device. As Dr. Loi noted, “They really wanted something that has the same kind of elegance or quality or look and feel of the device that they choose to purchase.”

    Along with these findings, additional surprises came out of the research. A strong sense of familiarity was expressed when using the stylus, as opposed to the learned behavior of a mouse. The direct, tactile sense that holding a pen and writing produces was a pleasant surprise to many, especially given that we have moved so far away from handwritten notes. Email, text messages, Tweets, ubiquitous cell phone coverage—all have combined to “depersonalize” our interactions with one another. The reintroduction of direct handwritten notes was seen to add a more human, personal touch to the communications. Technology users are looking for something both practical and expressive over which they have complete control.

    Another unexpected finding was the importance of sound when using the stylus. The key here was that the passive stylus had a soft, broad tip—essentially like a tiny finger tip. The active stylus, by contrast, had a solid tip that is pressure sensitive. This means that there was a distinct “click” sound when users touched the stylus to the screen. In some locations, such as Europe and the United States, this was seen as a positive feedback that solid contact had been made. In other places, such as China, users expressed irritation over the sound and even concern that the tool was possibly damaging the screen. Clearly, such cultural differences must be considered when designing a stylus for general use.

    So, why now? The stylus as a method of computer interaction has been around for decades. Why is there a resurgence in popularity for this age-old tool now? Well, partially it is because we are only now developing the necessary computing ecosystems in terms of touch-centric operating systems and stylus-enabled applications that will allow users the freedom to choose the most effective method of interaction. As the Chinese test groups noted, the previous types of stylus were those associated with old-style PDAs—thin, cheaply made, and easy to lose. This was considered outdated technology and therefore of no interest to them.

    However, with the advent of sensitive, touch-enabled screens and the software to take effective advantage of finger-based control, the market is ready to accept using stylus-enabled devices. With an active, pressure-sensitive stylus, the level of adoption is poised to become as high as was the touch-enabled handheld mobile computer (e.g., Apple iPhone*, Google Android*, and associated tablets). The analogy is similar to the up-surge in e-Books—previous attempts failed because the marketplace was not yet ready. Now, it is: “We were not ready technologically. We were not ready from a communication perspective, and we were not ready from an interaction perspective as a culture. This is why, now, we’ve got Windows 8, we’ve got touch screens that are all over the place, we’ve got millions of applications. It’s a different planet.”

    The final set of observations is centered on how the application software responded to the presence of a stylus and how system users were disappointed not to be able to do all of the things you would expect when holding a pen. For example, touch-enabled operating systems, such as Apple iOS*, Android, or Windows 8, support navigation with a pen (e.g., button clicks, swipe moves) but do not directly support handwriting or character recognition. In fact, the version of Microsoft Office 2013 used in the testing was partly touch enabled, but users were surprised that they couldn’t just write on the screen with the stylus as they expected to. This disconnect between expectations of system function and reality will be a major limiting factor on acceptance of stylus technology. As Dr. Loi noted, “They would look at me and say, ‘Why doesn’t it work?’ And I would say, ‘Well, it hasn’t been implemented. You can’t quite do that.’ They were like, ‘Why?’ Which is a very, very good question: Why?”

    Lessons Learned
    Recommendations for industrial design of stylus-enabled devices included:

    • To use a stylus effectively on a flat surface—vertical or horizontal—it is necessary to engineer palm-rejection algorithms into the hardware touch sensors.
    • An active stylus was preferred over passive for its accuracy and responsiveness, but users liked the tactile sensation that the softer tip of a passive stylus against the screen provided.
    • Users strongly prefer multiple interaction options and the ability to move freely between all forms of touch, stylus, or mouse-based navigation.
    • A sense of ownership and direct control over the computer was a strong motivator in adoption of the stylus.
    • Personalization—the ability to select different materials, finishes, and pointing tips—was also shown to be a driving factor in adoption of the stylus.
    • The stylus must be directly integrated with the body of the device and match the look and feel of that device.
    • Unexpected user cultural differences, such as response to sound, will have a dramatic effect on the acceptance of stylus technology.

    Human Interaction with Our Computing Devices

    As this article has shown, in an increasingly mobile computing environment, there is a need for better control over those devices. The rediscovery of the stylus as a pointing device provides for fine-grained, direct control over a touch-enabled device. Contrary to some opinions in the industry, this study showed that not only were people accepting of a stylus-based input device, but they actively preferred it for certain applications. The attitudes that prevented earlier adoption are giving way to innovation, such as the introduction of tactile (haptic) feedback.

    More and more, computer users are looking for the seamless incorporation of all forms of system interaction, from direct screen touch to stylus, keyboard, mouse, and touch pad. Each form offers advantages to the user, and as the computing world meshes more deeply with the real world, this approach allows enhanced control over both.

    One of the biggest hurdles to overcome is the current lack of touch- and stylus-enabled software. Application developers are lagging behind the technological advances, with dissatisfaction the result. Fortunately, a number of application developers have recognized this need and have begun to direct attention toward stylus support. For example, applications have been developed to refocus data input software for handwriting recognition. Applications such as Penultimate*, Evernote*, and Springpad* all accept handwriting directly into the application, with the ability to convert written text to digital text. Going forward, application developers will improve on the past and find novel ways to use the stylus, leading to wider adoption in the marketplace.

    Design teams should be highly encouraged by the results of this study. It is clear that direct interaction with users is the best way to learn about how a technology will be used in practice. System users are not one homogeneous group. They are all individuals. They want personalized control over their technology. The technology must adapt to them, not they to the technology.

    As Dr. Loi noted, “People want excitement. They want passion. They want the right thing, and that’s what we should do. The only way is for [the development community] to be exposed to the reality of everyday users.”

    For More Information

    • Matthew Baxter-Reynolds. (July 2012). “The Human Touch: Building Ultrabook Applications in a Post-PC Age.” Intel Research Article.
    • Seamus Bellamy. (May 18, 2012). “Roundup: The Best Stylus for iPad and Android Tablets.” TabTimes. http://tabtimes.com/review/ittech-accessories/2012/05/18/roundup-best-stylus-ipad-or-android-tablets.
    • Min Lin, Kathleen J. Price, Rich Goldman, & Andrew Sears. (2005). “Tapping on the Move: Fitts’ Law Under Mobile Conditions.” Managing Modern Organizations Through Information Technology, Proceedings of the 2005 Information Resources Mgmt. Assoc. Internat. Conf. Idea Group, Inc.
    • Koji Yatani & Khai N. Truong. (2009). “An Evaluation of Stylus-based Text Entry Methods on Handheld Devices Studied Under Different User Mobility States.” Pervasive and Mobile Computing, 5, pp. 496–508.
    • Suranjit Adhikari. (2012). “Haptic Device for Position Detection,” U.S. Patent Application, Pub. No. US2012/0293464. Submitted November 22, 2012.
    • M.R. Davis & T.O. Ellis (1964). “The RAND Tablet—A Man-Machine Graphical Communication Device.” Memorandum to the Advanced Research Projects Agency, U.S. Department of Defense.

    About the Author

    Ben Lieberman holds a Ph.D. in biophysics and genetics from the University of Colorado, Health Sciences Center. Dr. Lieberman serves as principal architect for BioLogic Software Consulting, bringing more than 15 years of software architecture and IT experience in various fields, including telecommunications, airline travel, e-commerce, government, financial services, and the life sciences. Dr. Lieberman bases his consulting services on the best practices of software development, with specialization in object-oriented architectures and distributed computing—in particular, Java*-based systems and distributed website development, XML/XSLT, Perl, and C++-based client–server systems. Dr. Lieberman has provided architectural services to many corporate organizations, including Comcast, Cricket, EchoStar, Jones Cyber Solutions, Blueprint Technologies, Trip Network Inc., and Cendant Corp.; educational institutions, including Duke University and the University of Colorado; and governmental agencies, including the U.S. Department of Labor, Mine Safety and Health Administration and the U.S. Department of Defense, Military Health Service. He is also an accomplished professional writer with a book (The Art of Software Modeling, Benjamin A. Lieberman, Auerbach Publications, 2007), numerous software-related articles, and a series of IBM corporate technology newsletters to his credit.

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the US and/or other countries.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • Windows 8*
  • Windows* Marketplace
  • touch
  • mobile
  • Computing World
  • Computing device
  • table
  • active stylus
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Touch Interfaces
  • User Experience and Design
  • URL

  • Windows* 8 OS Tutorial: Writing a Multithreaded Application for the Windows Store* using Intel® Threading Building Blocks - now with DLLs.


    This article explains how to build a simple application for the Windows Store* using Intel Threading Building Blocks (Intel® TBB).

    A previous post, “Windows* 8 Tutorial: Writing a Multithreaded Application for the Windows Store* using Intel® Threading Building Blocks”, discussed experimental support for Windows 8 Store applications. Now Intel TBB 4.1 update 3 contains binaries for this support, as does the corresponding tbb41_20130314oss stable release.

    To make a simple app, create a new Blank App (XAML) project using the default Visual C++ > Windows Store template. The remainder of this tutorial uses tbbSample0321 as the project name.

    Add a couple of buttons to the main page (tbbSample0321.MainPage class). After adding these, the XAML file of the page will look like this:

    <Page
        x:Class="tbbSample0321.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="using:tbbSample0321"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d">
        <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
            <Button Name="SR" Margin="167,262,0,406" Height="100" Width="300" Content="Press to run Simple Reduction" Click="SR_Click"></Button>
            <Button Name="DR" Margin="559,262,0,406" Height="100" Width="300" Content="Press to run Deterministic Reduction" Click="DR_Click"></Button>
        </Grid>
    </Page>
    

    Then add declarations of the methods that process button clicks to the main page header file (MainPage.xaml.h):

    #pragma once
    #include "MainPage.g.h"

    namespace tbbSample0321
    {
        public ref class MainPage sealed
        {
        public:
            MainPage();
        protected:
            virtual void OnNavigatedTo(Windows::UI::Xaml::Navigation::NavigationEventArgs^ e) override;
        private:
            void SR_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
            void DR_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
        };
    }

    Next, add Intel TBB library functions in the handlers for the buttons. As an example, use reduction (tbb::parallel_reduce) and deterministic reduction (tbb::parallel_deterministic_reduce) algorithms.  To do so, add the following code to the main page source file MainPage.xaml.cpp: 

    #include "tbb/tbb.h"
    void tbbSample0321::MainPage::SR_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
    {
        int N=100000000;
        float fr = 1.0f/(float)N;
        float sum = tbb::parallel_reduce(
            tbb::blocked_range<int>(0,N), 0.0f,
            [=](const tbb::blocked_range<int>& r, float sum)->float {
                for( int i=r.begin(); i!=r.end(); ++i )
                    sum += fr;
                return sum;
        },
            []( float x, float y )->float {
                return x+y;
        }
        ); 
        SR->Content="Press to run Simple Reduction\nThe answer is " + sum.ToString();
    }
    
    
    void tbbSample0321::MainPage::DR_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
    {
        int N=100000000;
        float fr = 1.0f/(float)N;
        float sum = tbb::parallel_deterministic_reduce(
            tbb::blocked_range<int>(0,N), 0.0f,
            [=](const tbb::blocked_range<int>& r, float sum)->float {
                for( int i=r.begin(); i!=r.end(); ++i )
                    sum += fr;
                return sum;
        },
            []( float x, float y )->float {
                return x+y;
        }
        ); 
        DR->Content="Press to run Deterministic Reduction\nThe answer is " + sum.ToString();
    }
    

    Then configure Intel TBB in the project property pages.
    From Visual Studio, go to Project > Properties > Intel Performance Libraries, and set Use TBB to Yes:

    If you use an open source package, add the folder <TBB_folder>/include to the project properties Additional Include Directories and add the folder that contains the tbb.lib library file to Additional Library Directories.  

    Next, add the tbb.dll and tbbmalloc.dll libraries to the application container. In order to do this, add the files to the project via Project > Add Existing Item…

    and set the Content property to Yes. In this case, the files will be copied to the application container (AppX) and can then be loaded at application launch as well as on demand.

    That’s it! This simple application is ready and should be a good start towards writing a more complex parallel application for the Windows Store using Intel® Threading Building Blocks.

    * Other names and brands may be claimed as the property of others

  • Intel Parallel Studio
  • intel threading building blocks
  • Intel TBB
  • tbb
  • C++
  • C++11
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • C/C++
  • Intel® Threading Building Blocks
  • Intel® C++ Studio XE
  • Microsoft Windows* 8 Style UI
  • Open Source
  • Parallel Computing
  • URL
  • Code Sample
  • Compiler Topics
  • Libraries
  • Multithread development
  • Learning Lab
  • Developing for High DPI Applications for Windows* 8


    Chris Phlipot
    March 14, 2013

    Download Article


     Windows DPI Scaling.pdf (336.59 KB)

    In 2013, the industry is moving towards using high resolution, high DPI screens on Windows devices. Today, most tablets have 1366x768 resolution screens between 10 and 11 inches. Soon, however, you’ll be seeing devices with the same size screens with 1080p and 1440p resolution, resulting in a much higher DPI. Microsoft’s Surface* Pro tablet is one such device, utilizing a 1080p 10.6” screen. Windows is designed to scale existing applications according to the screen resolution, but developers still need to do a few things, depending on the type of application (desktop or Windows Store), to make sure their applications scale properly.

    DPI Scaling on the Desktop


    Windows scales desktop applications differently from Windows Store apps. For the Windows desktop, there are four levels of scaling: 100%, 125%, 150%, and 200%. For reference, Microsoft’s Surface Pro uses a 1080p 10.6” display and, by default, sets its desktop scaling to 150%.

    To take full advantage of DPI scaling, you need to make your application DPI aware. If not, Windows will automatically scale the application to the appropriate size; however, the application will appear blurry (see figure 1 below). Declaring your application to be DPI aware is the best way to ensure your applications are scaled in such a way that images and text remain crisp and sharp when scaled to a higher DPI.
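    For a classic Win32 desktop application, one way to opt in programmatically is sketched below; SetProcessDPIAware() and GetDeviceCaps() are standard Win32 calls, while the manifest-based dpiAware declaration described in the MSDN documentation linked below is the recommended route for shipping applications. Treat this as a minimal sketch rather than a complete recipe.

    // Minimal sketch: a console-style Win32 program that declares itself DPI
    // aware and then reads the effective DPI so UI elements can be scaled by
    // dpi / 96.0. Production code should prefer the <dpiAware> manifest entry.
    #define _WIN32_WINNT 0x0600
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Opt out of automatic (blurry) bitmap stretching.
        SetProcessDPIAware();

        // Query the effective DPI of the primary display.
        HDC screen = GetDC(NULL);
        int dpiX = GetDeviceCaps(screen, LOGPIXELSX);
        int dpiY = GetDeviceCaps(screen, LOGPIXELSY);
        ReleaseDC(NULL, screen);

        std::printf("Effective DPI: %d x %d (scale factor %.2f)\n",
                    dpiX, dpiY, dpiX / 96.0);
        return 0;
    }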


    Figure 1. 200% scaling.  Internet Explorer* (left) is DPI aware. Chrome* (right) is not.

    When making an application DPI aware, you must make sure that your application’s UI scales appropriately. This involves providing higher resolution assets, as well as checking that the text and text containers are scaled appropriately.  It is recommended that you test your applications under high DPI situations. The easiest way to do this is by changing the display scaling settings in the Control Panel display settings, as shown in figure 2 below.


    Figure 2. DPI scaling for testing desktop applications can be changed in the Windows* Control Panel.

    For more information, refer to Microsoft’s MSDN documentation on high DPI desktop applications: http://msdn.microsoft.com/en-us/library/dd464646.aspx

    Scaling on Windows Store Apps


    Windows Store apps handle scaling differently. All applications are automatically scaled to the proper size. Unlike on the desktop, the Windows 8 UI supports 100%, 140%, and 180% scaling modes. Text, images, and the rest of the UI are automatically scaled to the proper size on displays with high DPI, without any interaction from the developer.

    However, for the application to look its best, you need to provide higher resolution images for when the 140% and 180% scaling modes are used.  An alternative is to use scalable vector graphics, in which case Windows will automatically scale the images to the proper resolution, while preserving image quality.
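    If an app loads bitmaps by hand rather than through the automatic resource loader, it can also query the current scale factor at run time. The following is a minimal C++/CX sketch, assuming a Windows 8 Store app and the Windows 8-era DisplayProperties API (later releases replace it with DisplayInformation); the ScaleSuffix helper name is purely illustrative.

    // Minimal C++/CX sketch: map the current resolution scale to the file-name
    // suffix of a pre-scaled asset. The helper name ScaleSuffix is made up for
    // this example.
    #include <string>

    using namespace Windows::Graphics::Display;

    std::wstring ScaleSuffix()
    {
        switch (DisplayProperties::ResolutionScale)
        {
        case ResolutionScale::Scale140Percent: return L".scale-140";
        case ResolutionScale::Scale180Percent: return L".scale-180";
        default:                               return L".scale-100";
        }
    }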

    To ensure the application is rendered properly at different DPIs, it is recommended that you test your applications using the simulator in Visual Studio* 2012. You can then test different DPI settings by changing the simulated screen size and resolution (see figures 3 & 4).


    Figure 3. The simulator can be invoked from within Visual Studio* 2012 on Windows* 8


    Figure 4. DPI can be changed by selecting a different screen size and resolution from the settings on the right-hand side of the simulator.

    For more information on enabling display scaling for Windows Store apps, see Microsoft’s MSDN article: http://msdn.microsoft.com/en-us/library/windows/apps/hh465362.aspx

    Summary


    It is recommended that you test your applications in high DPI environments to ensure that they function properly and are rendered with the proper visual clarity. Desktop applications can be tested by changing the display scaling in the Control Panel, while Windows Store Apps can be tested using the simulator. If you encounter any issues when running at higher DPIs, refer to Microsoft’s MSDN documentation.

  • Tablet
  • ultrabook
  • Windows* 8
  • Windows store
  • Apps
  • Windows desktop
  • High DPI
  • high resolution
  • screen resolution
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • User Experience and Design
  • URL
  • Setting Text Size for Intel® Parallel Studio XE Components on Microsoft Windows* Operating Systems


    Intel® Inspector XE, Intel® VTune™ Amplifier XE, and Intel® Advisor XE use Microsoft Windows* OS system fonts to select text size. This allows the components to automatically pick up any change you make for accessibility or personal preference without affecting any other component user on the system.

     Therefore, changing the text size for the Intel® Parallel Studio XE GUI is simple. Just change the Windows* text size and reopen the component.

     Here are a few ways, based on using the Windows 7* OS, to change the text size:

    • Right-click anywhere on the screen. Select Personalize > Display and use the radio button to enlarge text.

     

     

     

    • From the same dialog box, click Set custom text size (DPI) to get a more precise way of setting text size.

     

     

    • Right-click anywhere on the screen, then select Personalize > Window color > Advanced appearance settings to adjust the text size for specific parts of a view, such as window title, window text, or message box text. You can also use this dialog box to change color settings for accessibility purposes or user preference.

     

     

    These examples are shown in the Windows 7* OS, but there are similar dialog boxes in all Windows* OS versions.

    Recommendation: Intel Parallel Studio XE components should pick up most text size updates even while they are running. Nevertheless, it is better to restart the components.

  • Developers
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Windows*
  • .NET*
  • C#
  • C/C++
  • Fortran
  • Java*
  • Beginner
  • Intel® Cluster Studio XE
  • Intel® Parallel Studio XE
  • Intel® Advisor XE
  • Intel® VTune™ Amplifier XE
  • Intel® Inspector XE
  • URL
  • Making the Call Stack Pane Work for You



    VTune™ Amplifier XE Call Stack Pane

    The call stack pane is displayed in viewpoints of analysis types that include stack data. It is displayed on the right side of the viewpoint (highlighted below in a gold box).

    call stack pane highlighted in viewpoint

    The call stack pane identifies the calling sequences to the selected function, in order of their contribution to that function's total time. Call stacks from different threads are aggregated together, showing all the call stacks for a function without indicating which threads made the calls. See the product documentation for more details.

    Some users have found that the call stack pane is difficult to read and decipher.  The VTune™ Amplifier XE supports configuring the data that is included in the call stack pane via a context menu.  By default, the binary filename, function name, byte offset to the call site within the function, source filename and line number are displayed (when available).  For example,

    call stack pane with all info

    The context menu allows the user to exclude some or all of this information and to change the formatting.

    call stack pane context menu

    Unchecking "Show Modules" excludes modules (that is, binary filenames) and results in the following display:

    call stack pane no module

    By default, the call stack pane attempts to display all information on one line ("One-line Mode"). Turning off this option results in the information being displayed on two lines, as in the following display:

    call stack pane two line mode

    When the source filename and line number are excluded, the display looks like the following:

    call stack pane no src file line number 2

    Finally, removing the binary filename (e.g., "module") as well as the source filename and line number results in a call stack pane display with only the function name and offset to the call site.

    call stack pane no src or line number

    Additionally, the context menu provides a mechanism for copying the calling sequence to the clipboard for pasting into other applications. Here is an example of data copied to the clipboard:

    clipboard small

    So, depending on your preference for viewing call stack information, you can configure the call stack pane to suit your needs.


  • performance profiler
  • Call Stack Pane
  • menu
  • Configure
  • Developers
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Server
  • Windows*
  • .NET*
  • C#
  • C/C++
  • Fortran
  • Java*
  • Beginner
  • Intel® C++ Studio XE
  • Intel® Cluster Studio XE
  • Intel® VTune™ Amplifier XE
  • Development Tools
  • Optimization
  • URL
  • Getting started
  • Advanced Touch Gestures API Overview, from iOS* to Windows* 8 Store Apps


    Download Article

    Download Advanced Touch Gestures API Overview, from iOS* to Windows* 8 Store Apps [PDF 351KB]

    Objective

    Developers looking to port their existing iOS* apps to Windows* 8 Store Apps face several challenges.  One of these challenges is porting existing touch detection code. In this article, we use a simple photo viewer as the preexisting iOS application, and we create a similar app on the Windows 8 side. Other application design models such as games are not discussed here.

    We will also provide an overview of the touch API differences between the two platforms and show you how to port your apps. Specifically, we discuss how to port tap, swipe, pinch/zoom, and rotation gestures across the platforms. C# is used as the Windows 8 programming language.

    Table of Contents

    1. Introduction
    2. Starting with a Completed iOS* Touch App
    3. A Touch API Mapping Table (High Level)
    4. Porting Tap Gestures
    5. Porting Swipe Gestures
    6. Porting Pinch/Zoom Gestures
    7. Porting Rotation Gestures
    8. Summary
    9. Appendix A

    1. Introduction

    Ultrabook™ devices, tablets, phones, and other touch-enabled devices have emerged in the mobile computing market. These devices support an innovative software usage model: apps that respond to user touch input. Today, touch-enabled apps are commonplace; users can browse the web, make purchases, and do so many other things with the use of simple swipe and drag gestures. Of course, this is all made possible with great application design practices in mind.

    This article provides an overview of porting preexisting iOS touch code to Windows 8. From a design perspective, developers should be cognizant of the fact that touch gestures are indistinguishable across platforms. For example, if a user swipes an iPad* screen or performs a two finger rotation gesture, the gestures would be performed the same way on an Ultrabook device. OS and application responses to touch (such as Charms in Windows 8) may differ across platforms, and we will describe these in detail below.

    While the primary programming language for iOS apps is Objective-C*, Windows 8 offers several options such as Visual Basic*, C#, C++, etc. Here, the programming language of choice will be C#. While several gestures exist, this article covers porting guidelines for the following: swipe, pinch/zoom, and rotation. At a high level, the end user doesn't perceive these gestures to be different when comparing iOS to Windows 8 apps, but programmatically, there are quite a few differences. The details are discussed below.

    2. Starting with a Completed iOS Touch App

    This article assumes that you are porting a preexisting iOS app to Windows 8. For example, let's call it PhotoGesture. A screenshot is presented below:



    Figure 2.1: Sample iOS* app (Photo source: Xcode* Simulator)

    The bottom button toggles the gesture detection mode. The available modes are rotation, pinch/zoom, and swipe. For swipe, the text to the right informs the user about the direction of the last swipe detected (up, down, left, right). Thus, this app supports not only single-finger swipe, but also two-finger rotation and two-finger pinch/zoom.

    This article assumes that you are already familiar with creating an app as shown above. In case you need a primer, here are the prerequisites for this article:

    3. A Touch API Mapping Table (High level)

    In Table 3.1 we show a high level comparison between iOS and Windows 8 touch APIs. The table isn't all inclusive. For more information, refer to the previous section for more iOS APIs or below for links pertaining to Windows 8.

    Table 3.1: Touch API Mapping Table (High Level)

    Gesture(s) | API Family (iOS) | API Family (Windows 8) | Description
    Single tap | Action, Outlet | Click, OnTapped (XAML), pointer events+, manipulation events+ | The most basic gesture, considered to be a discrete event
    Swipe | UISwipeGestureRecognizer | Pointer events, manipulation events+ | Considered a sequence of press, drag, release actions
    Two-finger rotation | UIRotationGestureRecognizer | Manipulation events | Multi-touch gesture
    Two-finger pinch/zoom | UIPinchGestureRecognizer | Manipulation events | Another multi-touch gesture

    +: Optional

    The following Windows 8 guide provides the framework for all touch APIs that will be discussed in the remainder of this article:

    http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465387.aspx

    This guide provides great supplemental information on Windows 8 touch by providing an example children's math game implementation:

    http://software.intel.com/sites/default/files/m/9/0/5/0/5/44966-Enabling_Touch_in_Windows_8_with_C.pdf

    Single tap and swipe gestures only require one finger and are thus easier to implement. For single finger tap, iOS uses both target-action and outlet design patterns. The practice is to use one or both depending on the UI element semantics. For example, a button only needs an action, whereas a textbox would use an outlet. Correspondingly, on the Windows 8 side, there are three ways of handling single tap as shown in the table above. The easiest way is to use a pre-defined XAML keyword such as onTapped. Pointer events can also be used with finger press and release corresponding to separate event triggers, although they are not needed for simple tap detection. Manipulation event implementation is similar to implementing pointer events in the most basic form of press/release. The press would correspond to ManipulationStarted while the release would correspond to ManipulationCompleted. Of course, this isn't required for handling simple tap events.

    For swipe, note that it's not discrete since dragging is continuous. iOS has a predefined UISwipeGestureRecognizer class for handling this. For Windows 8, a single pre-defined XAML keyword for tap detection won't suffice. At a minimum, we need two or more XAML keywords utilizing either pointer events or, at a deeper level, manipulation events. Refer to this link, as it provides XAML keywords for pointer events such as PointerPressed or PointerReleased:

    http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465387.aspx

    Similarly, manipulation events have their own XAML keywords.

    A note regarding pointer events: don't assume that a particular pointer event, such as PointerPressed, is guaranteed to fire for every interaction. There are many factors that govern which event fires, such as which device is being used. This will be discussed in the "Porting Swipe Gestures" section below. For now, please refer to this link for more information:

    http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.xaml.uielement.pointerreleased

    Windows 8 manipulation events are considered the most advanced of these APIs in that we must use them for handling multi-touch gestures. While the table above lists predefined multi-touch APIs for detecting rotation and pinch/zoom in iOS, we handle these gestures manually with manipulation events in Windows 8. This is discussed in more detail below.

    The following sections dive into the porting exercises. A sample Windows 8 app, SensorDemo, is used in conjunction with PhotoGesture mentioned above. For iOS, the storyboard designer is used in the code sections below.

    4. Porting Tap Gestures

    In our example, the tap gesture is for the bottom button shown in Figure 2.1. For iOS, designing a button starts with the storyboard. After setting up the target action for the button, the corresponding view controller header file's code for the iOS side looks like the following:

    //for button that changes selected gestures
    - (IBAction)modeChanged:(UIButton *)sender;	
    

    Figure 4.1: Button for Mode Selection ++++

    When the user clicks the button, the following implementation in the .m file handles the event:

    - (IBAction)modeChanged:(UIButton *)sender {
    
    	//handler code here…    
    }  
    

    Figure 4.2: Mode Change Handler ++++

    On the Windows 8 C# side, design begins with XAML. The developer opens the Toolbox and then drags a button into the design pane. The framework then auto-populates the .xaml file with the button specification. In this example, the click keyword is then manually added to the XAML code in order to specify the handler that manages button click/tap events:

    <Button x:Name="btnMode" Content="Mode Button" HorizontalAlignment="Left" Height="66" 
    VerticalAlignment="Top" Width="270" Click="ButtonToggleFiltering" Foreground="White" 
    Background="RoyalBlue" Canvas.Left="10" Canvas.Top="692" Margin="10,692,0,0" 
    Style="{StaticResource MyButtonsStyle}"/>
    

    Figure 4.3: Button Specification in XAML++

    Assuming the XAML file name is file.xaml, you can then add the click handler code to file.xaml.cs as follows:

    private void ButtonToggleFiltering(object sender, RoutedEventArgs e)
            {
                //handler code here
    
            }
    

    Figure 4.4: Click Handler +++

    5. Porting Swipe Gestures

    The following screen shot is taken from the iOS sample app side:



    Figure 5.1: Four Swipe Gesture Recognizers (Photo source: XCode*)

    Four distinct swipe gesture recognizers are used in iOS since any given swipe gesture recognizer can only detect at most one swipe direction. Thus, the collection of recognizers is used here to detect the up, down, left, and right swipe directions.

    Here is a snippet from the iOS view controller header file:

    //we use one swipe gesture instance per direction we wish to detect
    //since one recognizer instance can only detect one direction of
    //choice
    - (IBAction)onSwipeUp:(UISwipeGestureRecognizer *)sender;
    - (IBAction)onSwipeLeft:(UISwipeGestureRecognizer *)sender;
    - (IBAction)onSwipeRight:(UISwipeGestureRecognizer *)sender;
    - (IBAction)onSwipeDown:(UISwipeGestureRecognizer *)sender;
    

    Figure 5.2: Four Swipe Directions ++++

    Then, the implementation for one of the handlers looks like the following in the .m file:

    - (IBAction)onSwipeUp:(UISwipeGestureRecognizer *)sender {
        
        if(mode == 2)
        {
            direction = sender.direction;
            
            if(direction == UISwipeGestureRecognizerDirectionUp)
                last_swipe.text = @"Up";
        }  
    }
    

    Figure 5.3: One of the Handlers ++++

    Notice how, like the previous example, a "sender" object is used, and in this case, the direction property allows for proper swipe detection.

    While four swipe recognizers were needed on the iOS side, this isn't the case for Windows 8. Once again, we start with the XAML design. In this sample, pointer events are used. Recall that in section 3, a note of caution is made as to which keywords to use. Since events such as PointerPressed aren't always guaranteed to fire upon swipe, here, we simply use all possible keywords (highlighted below) for handling pointer events in a robust, platform-neutral way. This ensures that we don't miss handling events. It's up to you to determine which subset of keywords to use based on the platform at hand:

    <Image x:Name="imageToRotate" Source="Assets/Ultrabook-Arrow.png" 
    HorizontalAlignment="Center" Stretch="None" VerticalAlignment="Center" 
    ManipulationMode="All" ManipulationDelta="manip_delta" 
    PointerEntered="pressed" PointerPressed="pressed" 
    PointerCanceled="released" PointerCaptureLost="released" 
    PointerReleased="released" PointerExited="released" 
    Height="682" Canvas.Top="10" Width="922" Canvas.Left="264" 
    Margin="250,-73,194,159"/>
    

    Figure 5.4: Pointer Event Specification in XAML ++

    In this sample, note that an image is used. Also, note that while multiple keywords are used, we share the handler name. There is no necessity to use a separate handler for each keyword, but it may be desired based on the application design.

    Here, manipulation mode was specified in a way that all manipulation types are detected. This can instead be limited to rotation, scale, etc. For the complete list of manipulation modes supported, refer to this link:

    http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.xaml.input.manipulationmodes

    For Windows 8, the pressed and released handlers are presented below:

    void pressed(object sender, PointerRoutedEventArgs e)
            {
                if (_currentSensorMode == SensorMode.TOUCH_SWIPE)
                {
                    begin_swipe_x = e.GetCurrentPoint(this.imageToRotate).Position.X;
                    begin_swipe_y = e.GetCurrentPoint(this.imageToRotate).Position.Y;
                }
            }
    
    
    
    void released(object sender, PointerRoutedEventArgs e)
            {
                if (_currentSensorMode == SensorMode.TOUCH_SWIPE)
                {
                    end_swipe_x = e.GetCurrentPoint(this.imageToRotate).Position.X;
                    end_swipe_y = e.GetCurrentPoint(this.imageToRotate).Position.Y;
    
                    
    		//let's determine if there was more of a coordinate change in the x direction or y direction to better
    		//choose one of four directions as feedback to the user who has swept across the screen
    
                    bool x_axis = false;
    
                    if(Math.Abs(Math.Floor(begin_swipe_x-end_swipe_x)) > Math.Abs(Math.Floor(begin_swipe_y-end_swipe_y)))
                        x_axis = true;
    
    		
                    if(x_axis && end_swipe_x - begin_swipe_x > MIN_THRESHOLD) 
                        swipe_status.Text = "RIGHT"; //swipe right
    
                    if(x_axis && begin_swipe_x - end_swipe_x > MIN_THRESHOLD) 
                        swipe_status.Text = "LEFT"; //swipe left
    
                    if(!x_axis && end_swipe_y - begin_swipe_y > MIN_THRESHOLD) 
                        swipe_status.Text = "DOWN"; //swipe down
    
                    if(!x_axis && begin_swipe_y - end_swipe_y > MIN_THRESHOLD) 
                        swipe_status.Text = "UP"; //swipe up
                }
            }
    

    Figure 5.5: Handler Code+++

    Compared to the previous section, the event argument has changed to PointerRoutedEventArgs. The code above first notes the touch coordinates when swiping begins, and then it captures the final coordinates when swiping completes. Using a threshold and axial direction, these two coordinate pairs are then compared to determine the swipe direction. The user must move more than MIN_THRESHOLD along any given axis to register a swipe.

    Notice how in the Windows 8 case, you have more flexibility and control for precisely detecting a swipe since the threshold can be specified, etc. Note also that four separate recognizers were not needed. There are also other properties that are easily accessible, such as velocity data.

    6. Porting Pinch/Zoom Gestures

    We now discuss the pinch/zoom recognizer on the iOS side. Officially, the recognizer is called the “Pinch Gesture Recognizer,” but we will refer to it as pinch/zoom gesture recognizer since the recognizer really handles both touch motions.



    Figure 6.1: Pinch / Zoom Gesture Recognizer (Photo source: XCode*)

    Here is the view controller code for the iOS side:

    - (IBAction)onPinch:(UIPinchGestureRecognizer *)sender;
    

    Figure 6.2: On Pinch ++++

    Here is the corresponding implementation code:

    //for pinch zoom
    
    CGFloat scale = 1.0; //used for pinch/zoom image scale
    CGFloat orig_width, orig_height; //original dimensions for image view
    CGFloat old_width, old_height; //used for resizing origin change
    CGFloat old_origin_x, old_origin_y; //previous origin of image
    ...
    
    
    - (IBAction)onPinch:(UIPinchGestureRecognizer *)sender {
        
        if(mode == 1)
        {
            //first time through, assuming image size > 0
            if(orig_width == 0 && orig_height == 0)
            {   fr = _img.frame; //the encompassing frame for our UIImageView
            
                orig_width = fr.size.width;
                orig_height = fr.size.height;
            }
            
            scale = sender.scale; //scale change
            fr = _img.frame;
    
           //if needed, refer to Appendix A for the details of calculating new origin and dimension of scaled image 
    
    	//calculate scale and origin here…
    
    	
        }
    }
    

    Figure 6.3: The Corresponding Implementation ++++

    In this sample code, pinch/zoom is performed on an image. The image frame is first obtained. Then, the sender scale property is read to determine by what scale the user has pinched or zoomed. Given the scale value provided as input into the event handler, we wish to compute the new (x,y) origin denoted by (fr.origin.x, fr.origin.y). For these additional details, please look through Appendix A.

    For the Windows 8 side, unsurprisingly, design starts with XAML (the same as above, but this time with the manipulation keyword called out for emphasis):

    <Image x:Name="imageToRotate" Source="Assets/Ultrabook-Arrow.png" 
    HorizontalAlignment="Center" Stretch="None" VerticalAlignment="Center" 
    ManipulationMode="All" ManipulationDelta="manip_delta" 
    PointerEntered="pressed" PointerPressed="pressed" PointerCanceled="released" 
    PointerCaptureLost="released" PointerReleased="released" PointerExited="released" 
    Height="682" Canvas.Top="10" Width="922" Canvas.Left="264" Margin="250,-73,194,159"/>
    

    Figure 6.4: Manipulation Specification in XAML++

    On the Windows 8 side, since pinch/zoom doesn’t necessitate handling when the event starts or ends, we can simply treat it as a continuous event where manip_delta is continuously fired so long as the gesture continues. It is however perfectly acceptable to use the other manipulation phases as discussed in the links above.

    Here is the Windows 8 C# code for the routine:

    //moving image while holding down pointer
            void manip_delta(object sender, ManipulationDeltaRoutedEventArgs e)
            {
                …
    
                if (_currentSensorMode == SensorMode.TOUCH_PINCH)
                {
                    ScaleTransform tran = new ScaleTransform();
    
                    //scale in/out from center of image
                    tran.CenterX = imageToRotate.ActualWidth / 2;
                    tran.CenterY = imageToRotate.ActualHeight / 2;
    
                    tran.ScaleX = e.Cumulative.Scale;
                    tran.ScaleY = e.Cumulative.Scale;
    
                    //update the on-screen image using the transform
                    imageToRotate.RenderTransform = tran;
                }
            }
    

    Figure 6.5: Manipulation Delta Handler Code+++

    Once again, take note of the change in the event handler type. In this code example, since the transform origin is taken to be the center of the image, we don't need to do any mathematical tricks to fix the origin as we needed to in the iOS code above. The e.Cumulative.Scale property accumulates the overall scale change for this gesture event so long as the user continues it. This is why it suffices to just use ManipulationDelta.

    7. Porting Rotation Gestures

    We now move on to the final porting exercise: porting rotation code. Here is the iOS snapshot:



    Figure 7.1: Rotation Gesture Recognizer (Photo source: XCode*)

    The iOS view controller header file code follows:

    - (IBAction)onRotation:(UIRotationGestureRecognizer *)sender;
    

    Figure 7.2: On Rotation ++++

    Here is the corresponding implementation in the .m file:

    //for rotation
    
    CGFloat angle = 0.0; //rotation angle for image
    CGFloat last_angle = 0.0; //orientation of image at end of last gesture
    
    …
    
    
    - (IBAction)onRotation:(UIRotationGestureRecognizer *)sender {
        
     if(mode == 0)
     {
        angle = sender.rotation;
         
        //time to apply rotation transform to the image
        CGAffineTransform transformer = CGAffineTransformMakeRotation(last_angle + angle);
        [_img setTransform:transformer];
              
         if(sender.state == UIGestureRecognizerStateEnded)
             last_angle = last_angle + angle;
         
     }
    }
    

    Figure 7.3: iOS Sample Implementation++++

    The purpose of last_angle is to let the user rotate the image and then, when a new rotation gesture begins, start from the orientation the image had at the end of the previous gesture. This prevents the image from appearing to "jump" between separate rotation events.

    Now for the C# side. Instead of adding the manipulation events via XAML, this time we show the alternative way to specify the manipulation handlers: in the C# code-behind:

                imageToRotate.ManipulationStarted += manip_start;
                imageToRotate.ManipulationDelta += manip_delta;
    

    Figure 7.4: Specifying Manipulation Events Programmatically +++

    Notice that we continue to specify gesture handlers for the same image as in the previous sections, and the same "delta" routine is reused from the previous section. However, for the sake of the exercise, we also add a separate handler for when the manipulation starts. Here is the rest of the code in the .cs file:

    //the start of the gesture event: touching the image
            void manip_start(object sender, ManipulationStartedRoutedEventArgs e)
            {
                //there may have been a previous touch event that rotated the
                //image....let's let the orientation of the image at the end
                //of previous event be the start for this one to avoid a 
                //frame jump to original upright orientation
    
                if (_currentSensorMode == SensorMode.TOUCH_ROTATE)
                {
                    RotateTransform tran = new RotateTransform();
                    tran.Angle = _curAngle;
    
                    //rotate about the center of the image
                    tran.CenterX = imageToRotate.ActualWidth / 2;
                    tran.CenterY = imageToRotate.ActualHeight / 2;
    
                    //update the on-screen image using the transform
                    imageToRotate.RenderTransform = tran;
                }
            }
    
            void manip_delta(object sender, ManipulationDeltaRoutedEventArgs e)
            {
                if (_currentSensorMode == SensorMode.TOUCH_ROTATE)
                {
                    //time to rotate the image!
                    RotateTransform tran = new RotateTransform();
    
                    _curAngle += e.Delta.Rotation;
    
                    tran.Angle = _curAngle;
    
                    //rotate about the center of the image
                    tran.CenterX = imageToRotate.ActualWidth / 2;
                    tran.CenterY = imageToRotate.ActualHeight / 2;
    
                    //update the on-screen image using the transform
                    imageToRotate.RenderTransform = tran;
                }
    …
    

    Figure 7.5: The Corresponding Windows 8 Handler Code+++

    Unlike the previous Windows 8 code example, here we use e.Delta.Rotation. Rather than using a cumulative value to assign the orientation directly, we keep applying the small angular change reported each time the event fires. The _curAngle variable then tracks the overall angular change relative to the upright (zero-degree) orientation. Of course, you don’t have to handle the code this way; Windows 8 gives us considerable flexibility here.
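    Alternatively, if you would rather mirror the cumulative style used in the pinch/zoom handler, the event arguments also expose e.Cumulative.Rotation, the total rotation since the current gesture began. A minimal sketch of that variant (not part of the original sample) is shown below; it assumes a hypothetical _startAngle field captured when the manipulation starts:

            //hypothetical field: orientation of the image when the gesture began
            double _startAngle = 0.0;

            void manip_start(object sender, ManipulationStartedRoutedEventArgs e)
            {
                //remember where the previous gesture left the image
                _startAngle = _curAngle;
            }

            void manip_delta(object sender, ManipulationDeltaRoutedEventArgs e)
            {
                if (_currentSensorMode == SensorMode.TOUCH_ROTATE)
                {
                    //total rotation for this gesture, added to the starting orientation
                    _curAngle = _startAngle + e.Cumulative.Rotation;

                    RotateTransform tran = new RotateTransform();
                    tran.Angle = _curAngle;

                    //rotate about the center of the image
                    tran.CenterX = imageToRotate.ActualWidth / 2;
                    tran.CenterY = imageToRotate.ActualHeight / 2;

                    //update the on-screen image using the transform
                    imageToRotate.RenderTransform = tran;
                }
            }

    Both styles produce essentially the same on-screen result; the delta style spreads the bookkeeping across events, while the cumulative style concentrates it at the start of the gesture.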

    8. Summary

    This article summarized the essential steps needed to port preexisting iOS touch code to the Windows 8 platform for tap, rotate, pinch/zoom, and swipe gestures. The Windows 8 code examples incorporated both XAML and C# design approaches, and we showed that Windows 8 offers flexibility in the APIs that can be used to solve the porting challenges. We saw a case where handling swipe required only one recognizer on Windows 8. We also saw how, with manipulation events, the same event handler can be shared among different gesture types by adding a little logic to distinguish them (e.g., the gesture mode checks in the code snippets above). Windows 8 thus allows touch gestures to be carried over programmatically from other platforms like iOS so you can continue to provide the end user with a rich experience!

    9. Appendix A

    For scaling an image in iOS, we need to compute the new location of the scaled image's top-left corner because we want the image's center to remain fixed while it scales. Without this adjustment, the image would scale from its top-left corner rather than from its center. As noted above, this adjustment is not needed when scaling images in Windows 8. Here is sample iOS code, provided as a reference, for calculating a scaled image's new position and dimensions.

            old_origin_x = fr.origin.x;
            old_origin_y = fr.origin.y;
            old_width = fr.size.width;
            old_height = fr.size.height;

            //ensure that the center of the image stays fixed when
            //rescaled so that it appears to scale from the center out
            //rather than from a corner

            fr.size.width = orig_width * (scale/sqrt(2)); //rescale the image
            fr.size.height = orig_height * (scale/sqrt(2));
            fr.origin.x = old_origin_x - ((fr.size.width - old_width)/2);
            fr.origin.y = old_origin_y - ((fr.size.height - old_height)/2);

            _img.frame = fr;
    

    Figure 9.1: Adjusting the Result of a Scale++++
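    As a quick check (not part of the original sample), a little algebra confirms that the center really does stay fixed. Using the identifiers from the code above, the new horizontal center is:

        new_center_x = fr.origin.x + fr.size.width/2
                     = old_origin_x - (fr.size.width - old_width)/2 + fr.size.width/2
                     = old_origin_x + old_width/2
                     = old_center_x

    and the same argument applies to the vertical center.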

    The following figure illustrates what's happening in the above code:



    Figure 9.2: Scaled Image

    Intel, Ultrabook, and the Intel logo are trademarks of Intel Corporation in the US and/or other countries.

    *Other names and brands may be claimed as the property of others.

    Copyright © 2013 Intel Corporation. All rights reserved.

    ++This sample source code includes XAML code automatically generated by Visual Studio IDE and is released under the Intel OBL Sample Source Code License (MS-LPL Compatible)

    +++This sample source code is released under the Microsoft Limited Public License (MS-LPL)

    ++++This sample source code is released under the Intel IBL Apple Inc. Software License Agreement for XCode Agreement
