Channel: User Experience and Design

Android NDK standalone toolchain and x86-icc


Hi,

I'm trying to build libpng with x86-icc. I already built it using the standard x86 toolchain from the NDK, but now I want to use the Intel compiler.

Therefore I'm trying to create a new standalone toolchain with the following command:

$ANDROID_NDK_ROOT/build/tools/make-standalone-toolchain.sh  --platform=android-9 --install-dir=$PLATFORM_PREFIX --toolchain=x86-icc --verbose

Which fails with the following output:

Auto-config: --arch=x86

Targetting CPU: x86

Using GCC version:

Toolchain /<path to my android ndk>/android-ndk-r9d/toolchains/x86-icc/prebuilt/darwin-x86_64/bin/i686-linux-android-gcc is missing!

I've checked, and there is no prebuilt directory in .../x86-icc. How do I get a standalone toolchain for the Android NDK that uses the Intel compiler? Any help is greatly appreciated!

I'm running this on Mac OS X with Android NDK r9d.


Krita* Gemini* - New Capabilities on 2 in 1 Devices


Download the PDF [Eng]

Why 2 in 1 Devices?

A 2 in 1 is a computer that can act as both a laptop and a tablet. In laptop (desktop) mode, the primary input devices are the keyboard and mouse. In tablet mode, input comes through the touch screen, using fingers or a stylus. A 2 in 1 device, like an Intel® Ultrabook™, delivers precision and control with either input method: you can type on the keyboard when working, or use touch when playing games.

Developers need to consider different scenarios in their applications to take advantage of this new class of devices. Some applications can keep the same menus and appearance in both modes. Others, such as Krita Gemini for Windows* 8, change their features and interface elements depending on the mode. Krita is a sketching and painting program, a full-featured solution for creating digital art from scratch. This article describes how the Krita developers implemented 2 in 1 mode detection, including both automatic and user-initiated mode switching, and covers some areas developers should pay attention to when implementing 2 in 1 applications.

Introduction

Over the years, computers have used many different input methods, from punch cards and the command line to pointing devices such as the mouse. With the arrival of touch screens, objects on screen can be selected not only with a mouse but also with a finger or a stylus. In most cases, not every task is convenient to perform with touch alone, but in applications that let you choose the input method, such as Krita Gemini, that is not a problem: 2 in 1 devices let you pick the most convenient user interface mode on a single device.

A 2 in 1 device can move between laptop and tablet modes in different ways (Figure 1 and Figure 2); many examples are available on the Intel site. A device can go from laptop to tablet mode by detaching the screen from the keyboard, or by disabling the keyboard and making the screen the primary input device (for example, by folding the screen over the keyboard). Computer manufacturers report the hardware mode change to the operating system. The Windows* 8 API event WM_SETTINGCHANGE, together with the "ConvertibleSlateMode" text parameter, signals the automatic switch to tablet mode and back to laptop mode. In addition, developers are encouraged to add a button so users can switch modes manually.

Just as 2 in 1 devices can switch between tablet and laptop modes in different ways, an application can be written to react to mode changes in different ways. In some cases the tablet-mode user interface should stay as close as possible to laptop mode; in others, fairly significant changes make sense. Intel works with many vendors to help them support the different operating modes of convertibles in their applications. Intel engineers helped the developers at KO GmbH combine the functionality of the Krita Touch application with the popular open source painting application Krita (for laptops) in the new Krita Gemini application. The Krita project has an active developer community that welcomes new ideas and provides high-quality support. The team added mechanisms for a smooth transition from laptop mode (mouse and keyboard) to the touch interface in tablet mode. See the Krita Gemini user interface transition in the short video in Figure 3.


Figure 3. Video: the Krita Gemini UI transition; click the icon to play

Intel works with many PC manufacturers to advance the 2 in 1 category, and the Intel Developer Zone provides a variety of resources to help developers create applications. See the Additional Resources section at the end of this article.

Create in Tablet Mode, Refine in Laptop Mode

The Gemini developers set out to make full use of the interface in each mode of operation. Figures 4 and 5 show just how different the user interface is in the two modes. This lets the user move smoothly from sketching "in the field" in tablet mode to retouching and working out finer details in laptop mode.


Figure 4: Krita Gemini user interface in tablet mode


Figure 5: Krita Gemini user interface in laptop mode

There are three main steps to implementing support for switching between the two modes in an application.

Step 1: Touch support. We were fortunate here: touch input was already widespread before 2 in 1 devices appeared. Implementing it usually takes considerably more effort than handling the switch between tablet and laptop modes. Intel has published articles on adding touch support to Windows 8 applications.

Step 2: Mode switching support. The first part of the video (Figure 3) shows an automatic mode change triggered by a sensor, in this case by flipping the device (Figure 6). It is followed by a mode change triggered by the user pressing the corresponding button in the application (Figure 7).


Figure 6: Sensor-triggered application switch on a hardware mode change


Figure 7: The Switch to Sketch button, a user-initiated switch to tablet mode

Automatic switching requires determining the sensor state, monitoring it, and taking the appropriate action when the state is known. In addition, for the user's convenience you should also provide a way to switch the application's mode manually. An example of adding sensor-based mode switching can be found in the Intel article "How to Write a 2 in 1 Aware Application". The Krita code that manages the mode transitions can be found in the application's source by searching for SlateMode. Krita is distributed under the GNU Public License; see the source code repository for the latest version.

// Snip from Gemini - Define 2-in1 mode hardware states:

#ifdef Q_OS_WIN
#include <shellapi.h>
#define SM_CONVERTIBLESLATEMODE 0x2003
#define SM_SYSTEMDOCKED 0x2004
#endif

Not all touch-enabled computers support automatic switching, so we recommend doing what the Krita Gemini developers did and adding a button that lets users switch the application mode manually. The Gemini button is shown in Figure 7. The user-interface change triggered by the button works the same way as the one triggered by the hardware sensor: the on-screen content and the default input device change from the touch screen and large icons in tablet mode to the keyboard, mouse, and smaller icons in laptop mode. However, because no sensor event occurred, the button-driven path must change the display, icon, and default input settings without any sensor state data. Developers should therefore let the user switch modes with either the mouse or touch regardless of the button state, in case the user ends up in a mode they did not intend.

The definition of the KAction() button, its state, and its actions is shown in the code below:

// Snip from Gemini - Define 2-in1 Mode Transition Button:

         toDesktop = new KAction(q);
         toDesktop->setEnabled(false);
         toDesktop->setText(tr("Switch to Desktop"));
         connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchDesktopForced()));
         connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchToDesktop()));
         sketchView->engine()->rootContext()->setContextProperty("switchToDesktopAction", toDesktop);

The developers then handled the events toggled by the button. The last known state of the system is checked first, and then the mode transition is performed:

// Snip from Gemini - Perform 2-in1 Mode Transition via Button:

#ifdef Q_OS_WIN
bool MainWindow::winEvent( MSG * message, long * result ) {
     if (message && message->message == WM_SETTINGCHANGE && message->lParam)
     {
         if (wcscmp(TEXT("ConvertibleSlateMode"), (TCHAR *) message->lParam) == 0)
             d->notifySlateModeChange();
         else if (wcscmp(TEXT("SystemDockMode"), (TCHAR *)
message->lParam) == 0)
             d->notifyDockingModeChange();
         *result = 0;
         return true;
     }
     return false;
}
#endif

void MainWindow::Private::notifySlateModeChange()
{
#ifdef Q_OS_WIN
     bool bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);

     if (slateMode != bSlateMode)
     {
         slateMode = bSlateMode;
         emit q->slateModeChanged();
         if (forceSketch || (slateMode && !forceDesktop))
         {
             if (!toSketch || (toSketch && toSketch->isEnabled()))
                 q->switchToSketch();
         }
         else
         {
                 q->switchToDesktop();
         }
         //qDebug() << "Slate mode is now"<< slateMode;
     }
#endif
}

void MainWindow::Private::notifyDockingModeChange()
{
#ifdef Q_OS_WIN
     bool bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0);

     if (docked != bDocked)
     {
         docked = bDocked;
         //qDebug() << "Docking mode is now"<< docked;
     }
#endif
}

Step 3: Testing and debugging. Using the palette with touch or with the mouse is simple enough, but the work area itself has to keep the focus and zoom level users expect, so simply enlarging everything was not an option. For touch interaction in tablet mode the controls can be made larger, but the image on screen has to be managed at a different level to preserve the expected working experience. Notice in the video (Figure 3) that the image in the editing area keeps the same on-screen size in both modes; the developers had to work to keep that screen area consistent. Another problem was that both user interfaces were running at the same time, which hurt performance badly because they shared the same graphics resources. Both interfaces were reworked to use separate resources, with system priority given to the active interface.

Conclusion

As you can see, adding 2 in 1 mode-switching support to an application is a fairly simple process. You do need to think carefully about how users will interact with your application in each mode. Read the Intel article "Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs" for more information on building applications with a transforming user interface. For Krita Gemini, the decision was to offer simple drawing capabilities in tablet mode and to retouch and refine the details in laptop mode. What could your application emphasize in tablet mode compared with laptop mode?

Additional Information

  1. Intel.com: Introducing the Intel Developer Zone
  2. Intel.com: 2 in 1 Information
  3. Intel.com: Touch Developer Guide for Ultra Mobile Devices
  4. Intel.com: Developer's Guide for Intel® Processor Graphics for 4th Generation Intel® Core™ Processor
  5. Intel.com: Ultrabook and Tablet Windows* 8 Sensors Development Guide
  6. Intel® Article: Ideum GamePlay: Touch Controls for Your Favorite Games
  7. Intel® Article: Designing for Ultrabook Devices and Touch-enabled Desktop Applications
  8. Intel® Article: How to Write a 2 in 1 Aware Application by Stephan Rogers
  9. Intel® Article: Mixing Stylus and Touch Input on Windows* 8 by Meghana Rao
  10. Intel® Developer Forum 2013 Presentation (PDF): Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs by Meghana Rao
  11. Krita Gemini*: General Information
  12. Krita Gemini: Executable download (scroll to Krita Gemini link)
  13. Krita Gemini Mode Transition: Source Code Download
  14. KO GmbH Krita Gemini: Source Code and License Repository

Other Intel Articles

Ultrabook Device and Tablet Windows Touch Developer Guide
All-in-One PC: What are the Developer Possibilities?
Windows 8* Store vs Desktop App Development

Additional Intel Resources

Intel® Developer Zone 
Intel® Graphics Performance Analyzers 
Developing Power-Efficient Apps for Ultrabook™ Devices 
Ultrabook™ App Lab 
Windows* 8.1 Preview – What’s New for Developers
Ultrabook™ and Tablet Windows* 8 Sensors Development Guide

About the Author

Tim Duncan is an Intel engineer described by friends as "Mr. Gadget." He currently helps developers integrate new technologies into their solutions. With decades of industry experience, his background ranges from chip manufacturing to systems integration. Find him on the Intel® Developer Zone: Tim Duncan (Intel).

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
© 2013-2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

  • 2 in 1
  • Krita Gemini
  • ultrabook
  • Graphics Tools
  • Developers
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • User Interface
  • Windows*
  • Game Development
  • Graphics
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Tablet PC
  • URL

    Gameplay: Touch Controls and 2 in 1 Support in Games


    Download the article [Eng., PDF 632KB]

    GestureWorks Gameplay is a new way to interact with popular PC games. Gameplay for Windows 8 lets gamers use and build their own virtual touch controllers, which can be applied to existing games. Each virtual controller adds buttons, gestures, and other controls that are mapped to the controls the game already supports. Players can also use hundreds of individually customizable gestures to interact with the game on screen. Ideum's collaboration with Intel provided access to the technology and engineering resources needed to implement Gameplay's touch capabilities.

    Watch this short video explaining how Gameplay works.

    Virtual Controllers

    Unlike traditional game controllers, virtual controllers are fully customizable, and players can share them with friends. Gameplay runs on Windows 8 tablets, Ultrabooks, 2 in 1 devices, all-in-one PCs, and even large-screen multitouch tables.


    Figure 1 ‒ Gameplay in action on an Intel Atom processor-based tablet

    "The virtual controller is real! Gameplay covers hundreds of PC games that have no touch support and lets you play them on a whole new generation of mobile devices," says Jim Spadaccini, director of Ideum, the company behind GestureWorks Gameplay. "Gameplay's virtual controllers are better than physical controllers because they can be fully customized and modified. We look forward to seeing Gameplay spread among gamers."


    Figure 2 ‒ The Gameplay home screen

    GestureWorks Gameplay ships with several dozen ready-made virtual controllers for popular Windows games (more than 116 unique titles are currently supported). Gameplay also lets users customize existing controllers and change their layouts. The program includes an easy-to-use virtual controller builder as well: users can create their own controllers for many popular Windows games distributed through the Steam service.


    Figure 3 ‒ Virtual controller layout

    Users can place joysticks, switches, scroll wheels, and buttons anywhere on the screen, change the size and transparency of controls, and add colors and labels. They can also create several layout views and switch between them at any point in the game. This lets a user build views for different in-game activities; in a role-playing game, for example, one view could be used for combat and another for managing equipment.


    Figure 4 ‒ Global gesture view of a virtual controller

    Gameplay, built on the GestureWorks Core gesture-processing engine, supports more than 200 global gestures. Basic global gestures such as tap, drag, pinch/spread, and rotate are supported out of the box and can also be customized. This makes it possible to extend controllers and use multitouch gestures for additional control in PC games; for example, certain actions or combat moves in first-person shooters can be triggered with a single gesture instead of several button presses. Gameplay also includes experimental accelerometer support: in racing games you can steer by tilting the Ultrabook or tablet. The program detects when a 2 in 1 device switches to tablet mode so the virtual controller can be enabled when needed.

    Development Challenges

    Building such a convenient program was not easy. Bringing the Gameplay idea to life required overcoming a number of technical problems. Some of them were solved with traditional programming techniques; others required more inventive solutions.

    2 in 1 Switching Support

    Early in Gameplay's development we decided to add support for 2 in 1 devices. The idea was that the application runs all the time but does not display any controllers in desktop mode. When the device switches to tablet mode, the Gameplay controller appears to provide touch control in the application. You enable this support in the virtual controller settings on 2 in 1 devices.


    Figure 5 ‒ Virtual controller settings

    For those who want more information about mode switching on 2 in 1 devices, the Resources section at the end of this article includes an excellent reference with code samples.
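
    The article does not show Ideum's detection code, so the following is only a rough sketch of the kind of check involved, built on the same Windows 8 metric used in the Krita Gemini article above (SM_CONVERTIBLESLATEMODE and the WM_SETTINGCHANGE notification). The ShowVirtualController call is a hypothetical placeholder, not a real Gameplay API.

    #include <windows.h>

    #ifndef SM_CONVERTIBLESLATEMODE
    #define SM_CONVERTIBLESLATEMODE 0x2003
    #endif

    // GetSystemMetrics returns 0 while a convertible device is in slate (tablet) mode.
    bool IsInTabletMode()
    {
        return GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0;
    }

    // Inside the application's window procedure, WM_SETTINGCHANGE with the
    // "ConvertibleSlateMode" string signals that the mode just changed.
    LRESULT CALLBACK OverlayWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_SETTINGCHANGE && lParam != 0 &&
            wcscmp(L"ConvertibleSlateMode", reinterpret_cast<const wchar_t*>(lParam)) == 0)
        {
            bool tablet = IsInTabletMode();
            // ShowVirtualController(tablet);   // hypothetical: show the overlay only in tablet mode
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }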

    DLL Injection

    DLL injection is a technique for running code inside the address space of another process by loading an external dynamic-link library. DLL injection is often used by external programs for malicious purposes, but the technique can also be used "for good," to extend a program in ways its authors did not anticipate. In Gameplay we needed a way to insert data into the input stream of a process (a running game) so that touch input could be translated into input the game recognizes. Of the many ways to implement DLL injection, the Ideum programmers chose Windows hooks via the SetWindowsHookEx API. They ultimately chose hooks on specific processes rather than global hooks, for performance reasons.
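
    As a minimal illustration of the hooking approach named above (this is not Ideum's actual implementation, and all names are invented for the sketch), a host process can install a per-thread WH_GETMESSAGE hook whose procedure lives in a DLL; Windows then loads that DLL into the target game's process, which is the injection step:

    #include <windows.h>

    // --- Inside the injected DLL ---------------------------------------------
    // Exported hook procedure; it runs in the game's address space once the hook
    // is installed with this DLL's module handle.
    extern "C" __declspec(dllexport)
    LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION)
        {
            MSG* msg = reinterpret_cast<MSG*>(lParam);
            // Here an overlay could inspect or rewrite input messages, e.g. turn
            // synthesized touch events into the input the game expects.
            (void)msg;
        }
        return CallNextHookEx(nullptr, code, wParam, lParam);
    }

    // --- Inside the host (Gameplay-style) process ------------------------------
    HHOOK InstallInputHook(HMODULE hookDll, DWORD gameThreadId)
    {
        HOOKPROC proc = reinterpret_cast<HOOKPROC>(GetProcAddress(hookDll, "GetMsgProc"));
        if (!proc)
            return nullptr;
        // A hook scoped to the game's thread (rather than a global hook) keeps the
        // performance cost down, matching the choice described above.
        return SetWindowsHookExW(WH_GETMESSAGE, proc, hookDll, gameThreadId);
    }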

    Launching Games from a Third-Party Launcher

    We looked at two ways to attach to the address space of a target process. An application can attach to the address space of an already running process, or it can launch the target executable as a child process. Both approaches are viable, but in practice it turned out to be much easier to track and hook the processes and threads created by the target process when our application is that target's parent.
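
    A bare-bones sketch of the second approach, launching the target executable as a child so its process and thread IDs are known from the start (again, not the shipped Gameplay code; the function name is made up):

    #include <windows.h>
    #include <string>

    bool LaunchGameAsChild(const std::wstring& exePath, PROCESS_INFORMATION* outInfo)
    {
        STARTUPINFOW si = { sizeof(si) };
        std::wstring cmdLine = L"\"" + exePath + L"\"";   // CreateProcessW needs a writable buffer
        BOOL ok = CreateProcessW(exePath.c_str(), &cmdLine[0], nullptr, nullptr,
                                 FALSE, 0, nullptr, nullptr, &si, outInfo);
        // On success, outInfo->dwProcessId and dwThreadId identify the child, so the
        // parent can install per-thread hooks and watch any further processes it spawns.
        // The caller is responsible for closing outInfo->hProcess and hThread.
        return ok != FALSE;
    }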

    This creates a problem with game client applications such as Steam and UPlay, which start when the user logs in. Windows does not guarantee the order in which processes start, and the Gameplay process must be running before those processes in order to hook their input. Gameplay solves this by installing a small system service that watches for application launches at user login. If one of the client applications we care about starts, Gameplay can attach to it as the parent process, and the controls are then displayed as intended.

    Lessons Learned

    Filtering Mouse Data

    During development we found that some games mishandled virtual mouse input derived from the touch screen. The problem showed up most often in first-person shooters and role-playing games where the mouse controls the look direction. The issue was that mouse input derived from the touch screen is absolute, tied to a point on the screen and therefore to a point in the game world, which made the touch screen nearly useless for mouse-look control. We solved this by filtering the mouse input while intercepting the game's input stream, which made it possible to simulate mouse input for look control with an on-screen control such as a joystick. It took a lot of time and effort to tune the joystick sensitivity and dead zone so that it felt like a mouse, but once that was done it worked beautifully. You can see this fix in action in games such as Fallout: New Vegas and The Elder Scrolls: Skyrim.
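
    One plausible way to implement this kind of fix, assuming input for the game is already being intercepted as described above, is to synthesize relative mouse motion from the on-screen joystick deflection with SendInput. This is a sketch rather than the shipped Gameplay code, and the sensitivity and dead-zone values are only illustrative:

    #include <windows.h>
    #include <cmath>

    // Translate a joystick deflection in [-1, 1] into relative mouse movement so
    // that mouse-look games behave as if a real mouse were being dragged.
    void SendLookDelta(float stickX, float stickY, float sensitivity, float deadZone)
    {
        // Ignore tiny deflections inside the dead zone.
        if (std::fabs(stickX) < deadZone && std::fabs(stickY) < deadZone)
            return;

        INPUT in = {};
        in.type = INPUT_MOUSE;
        // MOUSEEVENTF_MOVE without MOUSEEVENTF_ABSOLUTE produces *relative* motion,
        // unlike the absolute coordinates that come from the touch screen.
        in.mi.dwFlags = MOUSEEVENTF_MOVE;
        in.mi.dx = static_cast<LONG>(stickX * sensitivity);
        in.mi.dy = static_cast<LONG>(stickY * sensitivity);
        SendInput(1, &in, sizeof(INPUT));
    }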

    Choosing Games for Touch Control

    The Ideum developers spent a lot of time tuning virtual controllers for the best possible experience in each game. Various elements of a game determine how suitable it is for Gameplay. Below are general guidelines for which types of games work well with Gameplay.

    Gameplay usability for different types of games

    Good

    Better

    Best

    • Role-playing games (RPG)
    • Simulations
    • Fighting games
    • Sports
    • Racing
    • Puzzle games
    • Real-time strategy (RTS)
    • Third-person shooters
    • Platformers
    • Side-scrollers
    • Adventure games

    How well a game plays is an important factor in whether Gameplay is used with it, but the most important factor is stability. Some games simply do not work with input hooking, input injection, or overlays at all. This can happen for various reasons, but most often the game itself monitors its memory space or input stream to prevent tampering. Gameplay itself is perfectly safe and legitimate, but it uses techniques that are also used for malicious purposes, so unfortunately some games will not work with Gameplay unless they have built-in touch support.

    User Feedback

    Although Gameplay 1.0 is still at a fairly early stage, we have already received interesting user feedback about touch control in PC games, and some clear trends are visible. First, people clearly like being able to customize the touch interface in games. The rest of the feedback concerns tuning the in-game interface in a few key areas:

    • Many virtual controllers are not very convenient for left-handed players; this was the first thing changed in many of the published virtual controllers.
    • Users most often change button sizes and placement, so Ideum is considering adding automatic button calibration based on hand size in a future version of Gameplay.
    • Many users prefer touch-and-slide scrolling over discrete tap-and-release.

    We expect more trends to emerge as more virtual controllers are created.

    Conclusion

    GestureWorks Gameplay brings touch control to your favorite games. It does so by combining visual control overlays with support for additional interaction methods such as gestures, accelerometers, and 2 in 1 mode switching. The most rewarding part of working on this project has been the user feedback: people are genuinely excited to play PC games with touch and thoroughly enjoy their favorite games on a touch screen.

    About the Authors

    Erik Niemeyer is a software engineer in Intel's Software and Solutions Group. Erik has worked on performance optimization of applications running on Intel microprocessors for almost 15 years. He specializes in new user-interface development and micro-architectural tuning. When he is not working, he is probably climbing a mountain. He can be reached at erik.a.niemeyer@intel.com.

    Chris Kirkpatrick is a software engineer in the Intel Software and Services Group. He supports Intel graphics solutions on mobile platforms in the Visual & Interactive Computing Engineering group. He holds a bachelor's degree in computer science from Oregon State University. He can be reached at chris.kirkpatrick@intel.com.

    Resources

    https://gameplay.gestureworks.com/

    Additional Materials

    How to Write a 2-In-1 Aware Application: /en-us/articles/how-to-write-a-2-in-1aware-application
    Krita Gemini Development of a 2-In-1 Aware Application with Dynamic UI for Laptop and Tablet Modes: /en-us/articles/krita-gemini-twice-as-nice-on-a-2-in-1
    Detecting 2 in 1 Conversion Events & Screen Orientation in a 2 in 1 Device: /en-us/articles/detecting-slateclamshell-mode-screen-orientation-in-convertible-pc

    Videos

    Gestureworks Gameplay on an Ideum 46 Inch Multi-Touch Table

  • ideum
  • GestureWorks; Ultrabook
  • virtual controller
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Game Development
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Tablet PC
  • URL

    Build error when using IPP libs in the NDK


    The test code is copied from "Building Android* NDK applications with Intel® Integrated Performance Primitives (Intel® IPP)": https://software.intel.com/en-us/android/articles/building-android-ndk-applications-with-intel-ipp

    1. I installed Intel IPP for Windows.

    2. I copied the include folder into my project: jni/ipp/include

    3. I copied libippcore.a, libippi.a, libipps.a, ... into the project at jni/ipp/lib/ia32. It is strange that the libippcore.a files live in intel/Composer XE 2013 SP1/ipp/lib/mic. Should these files be in the mic folder?

     

    Then I build the whole project. The source files seem to compile fine, but it fails at the link stage.

    [x86] SharedLibrary : libIppAdd.so
    D:/android-ndk-r9d-windows-x86/android-ndk-r9d/toolchains/x86-4.6/prebuilt/windows/bin/../lib/gcc/i686-linux-android/4.6/../../../../i686-linux-android/bin/ld.exe: fatal error: D:/android-ndk-r9d-windows-x86/android-ndk-r9d/project/hello_test_exe_ipp/jni/ipp/lib/ia32/libippcore.a(ippinit.o): unsupported ELF machine number 181
    collect2: ld returned 1 exit status
    /cygdrive/d/android-ndk-r9d-windows-x86/android-ndk-r9d/build/core/build-binary.mk:588: recipe for target '/cygdrive/d/android-ndk-r9d-windows-x86/android-ndk-r9d/project/hello_test_exe_ipp/obj/local/x86/libIppAdd.so' failed
    make: *** [/cygdrive/d/android-ndk-r9d-windows-x86/android-ndk-r9d/project/hello_test_exe_ipp/obj/local/x86/libIppAdd.so] Error 1

    What is the reason for this error: libippcore.a(ippinit.o): unsupported ELF machine number 181?

    Thanks a lot!

    Br, Haitao

    Troubles with HAXM Installation Workaround Patch


    Hi,

    Sorry to bring this problem back (you have probably heard it too many times), but I am simply not able to figure out how to make HAXM work on my computer.

    After the error "Failed to configure driver: unknown error. Failed to open driver" I followed the instructions for the workaround given in the link below.

    https://software.intel.com/en-us/blogs/2013/04/25/workaround-patch-for-haxm-installation-error-failed-to-configure-driver-unknown

    The hax_extract.cmd exits with the following error:

    DIFXDRVINSTALL: installing driver package.

    LOG: 1, ENTER:  DriverPackageInstallW

    LOG: 1, RETURN: DriverPackageInstallW  (0xA)

    ERROR: failed with error code 0x0000000A

    Failed to install driver.

    Any ideas how to fix this?

    Thanks

    Bernard

    Attachment: hax_extract.log (1.76 KB)

    Adding and Retrieving Closed-caption messages in AVC and MPEG2 streams


    Legal Disclaimer

    In this article, we show how to add closed-caption (CC) data to AVC and MPEG2 streams and how to retrieve it. In the first part, we illustrate with code examples how to add CC messages to AVC and MPEG2 streams. In the second part, we show how to retrieve these messages using the decoder.

    To test closed captioning in AVC/MPEG2 with the following code snippets, we recommend using the tutorials instead of the samples (you can download the tutorials from the TUTORIALS tab here). The tutorials are much easier to understand and have more comments to help you. For encoding, use the simple_6_encode_vmem_lowlatency tutorial; for decoding, use the simple_2_decode tutorial. In the code snippets below, we point out where this code should be added for encode and decode.

    Let's get started!

    Adding Closed-caption Messages to the AVC and MPEG2 Encode Stream

    In this section, we will see how to add closed-caption data to AVC and MPEG2 streams during the encoding stage. Below are the high-level how-to steps, and we will follow that up with a code snippet.

    Step 1: Create the SEI/user_data payload with the appropriate header and message fields. For details on the AVC and MPEG2 payload structures, refer to this document. You can also refer to the following: for the AVC stream, Section 6.4.2 in the document; for the MPEG2 stream, Section 6.2.3 in the document.

    Step 2a: Populate the mfxPayload structure with this payload and header information.

    Step 2b: Pass the mfxPayload structure to the mfxEncodeCtrl structure.

    Step 3: Pass this structure to the EncodeFrameAsync function (as the first parameter), and you're done!

    Code example to add CC messages to AVC, in the main encoding loop:

    #define SEI_USER_DATA_REGISTERED_ITU_T_T35 4
    #define MESSAGE_SIZE 20
    typedef struct
    			{
    				 unsigned char countryCode;
    				 unsigned char countryCodeExtension[2];
    				 unsigned char user_identifier[4];
    				 unsigned char type_code;
    				 unsigned char payloadBytes[MESSAGE_SIZE];
    				 unsigned char marker_bits;
    			} userdata_reg_t35;
    
    			/*** STEP 1: START: SEI payload: refer to <link to AVC> for the format */
    			/* Populating the header for the SEI payload. In most cases, these assignments will not change */
    			userdata_reg_t35 m_userSEIData;
    			m_userSEIData.countryCode = 0xB5;
    			m_userSEIData.countryCodeExtension[0] = 0x31;
    			m_userSEIData.countryCodeExtension[1] = 0x00;
    			m_userSEIData.user_identifier[0] = 0x34;
    			m_userSEIData.user_identifier[1] = 0x39;
    			m_userSEIData.user_identifier[2] = 0x41;
    			m_userSEIData.user_identifier[3] = 0x47;
    			m_userSEIData.type_code = 0x03;
    			m_userSEIData.marker_bits = 0xFF;
    
    			/* Populate the actual message. In this example it is "Frame: <frameNum>" */
    			memset(m_userSEIData.payloadBytes,0,MESSAGE_SIZE);
    			sprintf((char*)m_userSEIData.payloadBytes, "%s%d", "Frame: ", nFrame);
    			/*** STEP 1: END: SEI payload: refer to <link to AVC> for the format */
    
    			/*** STEP 2a: START: Fill mfxPayload structure with SEI Payload */
    			mfxU8 m_seiData[100]; // Arbitrary size
    			mfxPayload m_mySEIPayload;
    			memset(&m_mySEIPayload, 0, sizeof(m_mySEIPayload));
    			m_mySEIPayload.Type = SEI_USER_DATA_REGISTERED_ITU_T_T35;
    			m_mySEIPayload.BufSize = sizeof(userdata_reg_t35) + 2; // 2 bytes for header
    			m_mySEIPayload.NumBit = m_mySEIPayload.BufSize * 8;
    			m_mySEIPayload.Data = m_seiData;
    
    			// Insert SEI header and SEI msg into data buffer
    			m_seiData[0] = (mfxU8)m_mySEIPayload.Type; // SEI type
    			m_seiData[1] = (mfxU8)(m_mySEIPayload.BufSize-2); // Size of following msg
    			memcpy(m_seiData+2, &m_userSEIData, sizeof(userdata_reg_t35));
    			mfxPayload* m_payloads[1];
    			m_payloads[0] = &m_mySEIPayload;
    			/*** STEP 2a: END: Fill mfxPayload structure with SEI Payload */
    
    			/*** STEP 2b: START: Encode control structure initialization */
    			mfxEncodeCtrl m_encodeCtrl;
    			memset(&m_encodeCtrl, 0, sizeof(m_encodeCtrl));
    			m_encodeCtrl.Payload = (mfxPayload**)&m_payloads[0];
    			m_encodeCtrl.NumPayload = 1;
    			/*** STEP 2b: END: Encode control structure initialization */
    
    			....
    			nEncSurfIdx = Get Free Surface;
    			Surface Lock;
    			pmfxSurfaces[nEncSurfIdx] = Load Raw Frame;
    			Surface Unlock;
    			....
    
    			/*** STEP 3: Encode frame: Pass mfxEncodeCtrl pointer */
    			sts = mfxENC.EncodeFrameAsync(&m_encodeCtrl, pmfxSurfaces[nEncSurfIdx], &mfxBS, &syncp);

     

    Code example to add CC messages to MPEG2, in the main encoding loop:

    Adding payloads to MPEG2 is similar to the AVC example above. The difference comes from the format used for the MPEG2 user_start_code as compared to an SEI message. Below, we illustrate how to populate an MPEG2 payload, in accordance with the ATSC standard for MPEG2 video.

    #define USER_START_CODE 0x1B2
    #define MESSAGE_SIZE 20
    typedef struct{
    				mfxU8 atsc_identifier[4];
    				mfxU8 type_code;
    				/* For type 0x03, some additional bits before the data field starts - refer to <link to cc_data()> */
    				mfxU8 additional_bits[2];
    				unsigned char cc_data[MESSAGE_SIZE];
    			}ATSC_user_data;
    
    			/** STEP 1 */
    			ATSC_user_data atsc;
    			atsc.atsc_identifier[0] = (mfxU8)0x34;
    			atsc.atsc_identifier[1] = (mfxU8)0x39;
    			atsc.atsc_identifier[2] = (mfxU8)0x41;
    			atsc.atsc_identifier[3] = (mfxU8)0x47;
    			atsc.type_code = 0x03;
    			atsc.additional_bits[0] = (mfxU8)0x12;	//00010010;		//cc_count, addnl data, cc data, em data processed
    			atsc.additional_bits[1] = (mfxU8)0xff;		//reserved
    			memset(atsc.cc_data,0,MESSAGE_SIZE);
    			sprintf((char*)atsc.cc_data, "%s%d", "Frame: ", nFrame);
    
    			/** STEP 2a */
    			mfxU8 m_seiData[100]; // Arbitrary size
    			mfxPayload m_Payload;
    			memset(&m_Payload, 0, sizeof(m_Payload));
    			m_Payload.Type = USER_START_CODE;
    			m_Payload.BufSize = MESSAGE_SIZE + 7;
    			m_Payload.NumBit = m_Payload.BufSize * 8;
    			m_Payload.Data = m_seiData;
    			memcpy(m_seiData, &atsc, 7);
    			memcpy(m_seiData+7, &atsc.cc_data, MESSAGE_SIZE);
    
    			mfxPayload* m_payloads[1];
    			m_payloads[0] = &m_Payload;
    
    			/** STEP 2b */
    			mfxEncodeCtrl m_encodeCtrl;
    			memset(&m_encodeCtrl, 0, sizeof(m_encodeCtrl));
    			m_encodeCtrl.Payload = (mfxPayload**)&m_payloads[0];
    			m_encodeCtrl.NumPayload = 1;
    
    			....
    			nEncSurfIdx = Get Free Surface;
    			Surface Lock;
    			pmfxSurfaces[nEncSurfIdx] = Load Raw Frame;
    			Surface Unlock;
    			....
    
    			/*** STEP 3 */
    			sts = mfxENC.EncodeFrameAsync(&m_encodeCtrl, pmfxSurfaces[nEncSurfIdx], &mfxBS, &syncp);

    In this section, we have seen how to add payloads to the AVC and MPEG2 encode stream. You can verify that the payloads were added by opening the output bitstream in an editor and viewing it in hex mode (for instance, GVim). You should see each frame carrying a payload that contains "Frame: <framenum>" along with the payload meta information. Below is an encoded out.h264 file that uses the above code snippet in the simple_6_encode_vmem_lowlatency tutorial to add CC captions. You can see the SEI message "Frame: 6" highlighted.

    Retrieving Messages from the Decoded Stream

    We just saw how to add CC messages to the encode stream. In this section, we will see how to retrieve the encoded SEI/userData messages from AVC and MPEG2 streams. The SDK provides the GetPayload() API for this purpose, and we illustrate how to use it for the AVC and MPEG2 cases. The GetPayload() call follows the DecodeFrameAsync() call and returns the mfxPayload structure populated with the message, the number of bytes, and a timestamp. Please note that you must initialize the BufSize and Data members of the structure before calling GetPayload().

    sts = mfxDEC.DecodeFrameAsync(&mfxBS, pmfxSurfaces[nIndex], &pmfxOutSurface, &syncp);
    
    	mfxPayload dec_payload;
    	mfxU64 ts;
    	dec_payload.Data = new mfxU8[100];
    	dec_payload.BufSize = MESSAGE_SIZE;
    	dec_payload.NumBit = 1;
    
    	/* Since the decode function is asynchronous, we will loop over the GetPayload() function until we drain all the messages ready */
    	while(dec_payload.NumBit > 0)
    	{
    		mfxStatus st = mfxDEC.GetPayload(&ts, &dec_payload);
    		#ifdef AVC
    			if((st==MFX_ERR_NONE)&&(dec_payload.Type==SEI_USER_DATA_REGISTERED_ITU_T_T35))
    		#endif
    		#ifdef MPEG2
    			if((st==MFX_ERR_NONE)&&(dec_payload.Type==USER_START_CODE))
    		#endif
    			{
    				fwrite(dec_payload.Data, sizeof(unsigned char), dec_payload.BufSize, stdout); //For debug purpose - Prints out the payload on to the screen.
    			}
    	}

    Below is the fwrite output from decoding the out.h264 file we encoded above. The console shows the output of the GetPayload() call for each frame.


    This concludes our how-to on adding and retrieving CC messages in AVC and MPEG2 streams. You can also refer to section 4.14 of our documentation for the pseudo-code, and for information on the APIs, see mediasdk-man.pdf in the doc folder of the Media SDK installation.

    Attachments:

    https://software.intel.com/sites/default/files/managed/35/1b/h264_cc.png
    https://software.intel.com/sites/default/files/managed/2e/1b/h264_cc_getPayload.png
  • Technical Article
  • Media Processing
  • User Experience and Design
  • Intel® Media SDK
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Best practices for writing correct code

    How to build a dictionary android app


    Guys, I'm new to Android; I have never written an Android program, but I want to build one. How do I start?

    I'm interested in building a dictionary app for my native language (Swahili, a local language in Tanzania), but I don't know how to start.

    I want the app to work like any other dictionary app, but in Swahili (I mean the words and their meanings should be in Swahili).

    I'm good with JavaScript, HTML5, CSS3, and a little Java.

    Help me, people.

    Thanks.

    Intel MDK Release Request


    Quote:

    Alexander Weggerle (Intel) wrote: Alex

    Dear Alex,

    I am writing to Intel and to you, on behalf of the 2013 Dell Venue 7/8 CLVP community and per the advice and recommendations of Dell's Technical Support Department, because Intel has sole discretion over the release of the MDK for these devices.

    On behalf of 2013 Venue 7 owners, and in particular myself and other early adopters of the Android on Intel platform, I formally request that Intel work with Dell on releasing the MDK for the 2013 Dell Venue 7 P706T_NoModem / Thunderbird device line.

    While I understand that when I and other early adopters decided to purchase an Intel-powered tablet the "Dell Venue 8 Developers Edition" had not been released or even rumored, most of us would not have chosen the Venue 7 over the Venue 8 had that information been available to us.

    Because of this, we hope that Intel will take the early adopters into full consideration and release the MDK for the Dell Venue 7 as it did for the Dell Venue 8.

    It is unfair to us early adopters of the Android on Intel platform to be forced to purchase a new device (within a few short months of already purchasing an Intel Android device) to use the MDK when the existing device can simply be converted.

    While I understand, Alex, that these decisions are not yours to make, I hope you will forward this along to the people who can make this decision, and I look forward to using my 2013 Dell Venue 7 with the MDK.

    Thank you,

    The Dell Venue 7 / 8 CLVP Community


    How to grab the OpenGL context created by Android's GLSurfaceView?


    Hello Intel engineers and fellow Intel developers (and Maxim Shevtsov specifically):

    I've always liked the high-quality and hands-on tutorials/application notes written by Intel engineers. Now I am trying to follow the excellent article, OpenCL and OpenGL Interoperability Tutorial, by Maxim Shevtsov, and would like to ask a question.

    Maxim's article assumes that the reader has the OpenGL context already created, and that both the OpenGL context and HDC are available. That's a very good starting point for most people. However, that's where my challenges are.

    I am trying to develop an Android app to process (in OpenCL) every frame of the camera preview. The preview is implemented with Android's GLSurfaceView and SurfaceTexture, with the code like this:

    public class CamGLView extends GLSurfaceView
    {
        CamGLViewRenderer mRenderer;

        CamGLView(Context context) {
            super(context);
            mRenderer = new CamGLViewRenderer(this);
            setEGLContextClientVersion(3);  // OpenGL context is created, but how to get hold of it from the JNI side?
            setRenderer((Renderer)mRenderer);
            setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        }

        ......
    }

    public class CamGLViewRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener
    {
        @Override
        public void onSurfaceCreated(GL10 unused, EGLConfig config) {
            mSTexture = new SurfaceTexture(mTexID[0]);
            if (null == mSTexture) {
                Log.e(mTag, "Creating SurfaceTexture failed !");
                return;
            }
            mSTexture.setOnFrameAvailableListener(this);

            // Setup the camera preview
            try {
                mCamera.setPreviewTexture(mSTexture);
            } catch(IOException ioe) {
                Log.e(mTag, "Camera.setPreviewTexture Error! " + ioe.getMessage());
                ioe.printStackTrace();
                return;
            }
        }

        ......
    }

    The Android app is up and running, and the camera preview is displayed. Now, on the JNI side, I want to grab each preview frame, send it to the OpenCL kernel to be processed, and then send it back to OpenGL for display. Maxim's article fits my application perfectly, except that I don't know how to get hold of the OpenGL context created by the Java code in the first place.

    Any ideas or thoughts on how to solve my problem?

    Thank you very much,

    Robby

    Developing 3D Games for Windows* 8 with C++ and Microsoft DirectX*


    Download PDF

    By Bruno Sonnino

    Game development is a perennially hot topic: everyone likes to play games, and they are among the best sellers on any list. But when you talk about developing a good game, performance is always a requirement. Nobody likes playing games that stutter or glitch, even on the cheapest devices.

    You can use many languages and frameworks to develop a game, but when you want performance in a Windows* game, nothing compares to Microsoft DirectX* with C++. With these technologies you are close to the hardware, able to use all of its resources and get excellent performance.

    I decided to develop such a game even though I am primarily a C# developer. I did a lot of C++ development a long time ago, but the language is quite different from what I was used to. In addition, DirectX is a new subject for me, so this article describes game development from a novice's point of view; more experienced developers will have to forgive my mistakes.

    In this article, I show how to develop a soccer penalty-kick game. The game kicks the ball, and the user moves the goalkeeper to catch it. We will not start from scratch: we will use the Microsoft Visual Studio* 3D Starter Kit, a logical starting point for anyone who wants to develop games for Windows 8.1.

    The Microsoft Visual Studio* 3D Starter Kit

    After downloading the Starter Kit, extract it to a folder and open the StarterKit.sln file. This solution contains a C++ project for Windows 8.1 that is ready to run. If you run it, you will see something like Figure 1.


    Figure 1. Initial state of the Microsoft Visual Studio* 3D Starter Kit

    This program demonstrates several useful concepts:

    • Five objects are animated: four shapes spinning around a teapot, and the teapot "dancing."
    • Each element has a different material: some have solid colors, and the cube has a material that uses a bitmap.
    • The light comes from the top left of the scene.
    • The bottom-right corner contains a frames-per-second (FPS) counter.
    • A score indicator is positioned at the top.
    • Clicking an object highlights it and increments its score.
    • Right-clicking, or swiping up from the bottom edge, opens an app bar with two buttons for changing the teapot's color.

    You can use these features to create any game, but first let's look at the files included in the kit.

    Let's start with App.xaml and its cpp/h files. When you run the application, App.xaml launches DirectXPage. In DirectXPage.xaml you have a SwapChainPanel and the app bar. The SwapChainPanel is a surface that hosts DirectX graphics in a XAML page. On it you can add XAML objects that are presented over a Microsoft Direct3D* scene, which is convenient for adding buttons, text, and other XAML objects to a DirectX game without having to create your own controls from scratch. The Starter Kit also adds a StackPanel that will be used as the scoreboard.

    DirectXPage.xaml.cpp contains the initialization of the variables, the wiring of the event handlers for resizing and orientation changes, the handlers for the Click events of the app bar buttons, and the render loop. All XAML objects are handled the same way as in any other Windows 8 program. The file also processes the Tapped event, checking whether a tap (or mouse click) hits an object; if it does, the handler increments that object's score.

    You must tell the program that the SwapChainPanel should render the DirectX content. To do this, according to the documentation, you "cast the SwapChainPanel instance to IInspectable or IUnknown, then call QueryInterface to obtain a reference to the ISwapChainPanelNative interface (the interface that enables the interop bridge). Then call ISwapChainPanelNative::SetSwapChain on that reference to associate the swap chain you implemented with the SwapChainPanel instance." This is done in the CreateWindowSizeDependentResources method in DeviceResources.cpp.
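
    As a rough sketch of what that interop step looks like (the Starter Kit already contains the real version in DeviceResources.cpp; the function name and the simplified error handling below are my own):

    #include <dxgi1_2.h>
    #include <windows.ui.xaml.media.dxinterop.h>
    #include <wrl/client.h>

    void AssociateSwapChain(Windows::UI::Xaml::Controls::SwapChainPanel^ panel,
                            IDXGISwapChain1* swapChain)
    {
        Microsoft::WRL::ComPtr<ISwapChainPanelNative> panelNative;

        // Cast the XAML SwapChainPanel to IInspectable, then query the interop interface.
        IInspectable* inspectable = reinterpret_cast<IInspectable*>(panel);
        HRESULT hr = inspectable->QueryInterface(IID_PPV_ARGS(&panelNative));

        if (SUCCEEDED(hr))
        {
            // Associate the DXGI swap chain that Direct3D renders into with the panel.
            hr = panelNative->SetSwapChain(swapChain);
        }
        // A real implementation would propagate failures (e.g., via DX::ThrowIfFailed).
    }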

    The game's main loop is in StarterKitMain.cpp, where the page and the FPS counter are rendered.

    Game.cpp contains the game loop and the hit testing. It computes the animation in the Update method and draws all the objects in the Render method. The FPS counter is rendered in SampleFpsTextRenderer.cpp.

    The game objects are in the Assets folder. Teapot.fbx contains the teapot, and GameLevel.fbx contains the four shapes that move around the dancing teapot.

    With this basic knowledge of the Starter Kit, you can start creating your game.

    Adding Assets to the Game

    You are developing a soccer game, so the first asset to add is a soccer ball. In GameLevel.fbx, remove the four shapes by selecting them one at a time and pressing Delete. In Solution Explorer, also delete CubeUVImage.png, because it will not be needed; this file is the texture used to cover the cube you just deleted.

    The next step is to add a sphere to the model. Open the toolbox (if you can't see it, click View > Toolbox) and double-click the sphere to add it to the model. If it looks too small, you can zoom in by clicking the second button on the toolbar at the top of the editor and pressing Z to zoom with the mouse (dragging toward the center zooms in), or by using the up and down arrow keys. You can also hold Ctrl and use the mouse wheel to zoom. You should have something similar to Figure 2.


    Figure 2. Model editor with a sphere shape

    This sphere is just plain white with some light on it. It needs a soccer-ball texture. My first attempt was to use a hexagonal grid like the one in Figure 3.


    Figure 3. Hexagonal grid for the ball texture: first attempt

    To apply the texture to the sphere, select it and, in the Properties window, assign the .png file to the Texture1 property. Although this seemed like a good idea, the result was not so good, as you can see in Figure 4.


    Figure 4. Sphere with the texture applied

    The hexagons are distorted because of the way the texture points are projected onto the sphere. You need a distorted texture, like the one in Figure 5.


    Figure 5. Soccer ball texture adapted to the sphere

    When you apply this texture, the sphere starts to look like a soccer ball. You only need to adjust a few properties to make it look more realistic. To do that, select the ball and edit the Phong effect in the Properties window. The Phong lighting model includes directional and ambient light and simulates reflective properties on the object. It is a shader included in Visual Studio that you can drag from the toolbox. If you want to learn more about shaders and how to design them with the Visual Studio editor, see the corresponding link in the "For more information" section. Set the Red, Green, and Blue properties under MaterialSpecular to 0.2 and MaterialSpecularPower to 16. You now have a better-looking soccer ball (Figure 6).


    Figure 6. Finished soccer ball

    If you don't want to create your models in Visual Studio, you can use a ready-made model from the web. Visual Studio accepts any model in the FBX, DAE, and OBJ formats: you just add it to the assets in your solution. As an example, you can use an .obj file like the one in Figure 7 (this is a free model downloaded from http://www.turbosquid.com).


    Figure 7. Three-dimensional .obj ball model

    Animating the Model

    With the model in place, it is time to animate it. Before that, however, I want to remove the teapot, since it is not needed. In the Assets folder, remove teapot.fbx. Then delete the code used to load and animate it. In Game.cpp, the models are loaded asynchronously in CreateDeviceDependentResources:

    // Load the scene objects.
    auto loadMeshTask = Mesh::LoadFromFileAsync(
    	m_graphics,
    	L"gamelevel.cmo",
    	L"",
    	L"",
    	m_meshModels)
    	.then([this]()
    {
    	// Load the teapot from a separate file and add it to the vector of meshes.
    	return Mesh::LoadFromFileAsync(

    You must change the model and remove the task continuation so that only the ball is loaded:

    void Game::CreateDeviceDependentResources()
    {
    	m_graphics.Initialize(m_deviceResources->GetD3DDevice(), m_deviceResources->GetD3DDeviceContext(), m_deviceResources->GetDeviceFeatureLevel());
    
    	// Set DirectX to not cull any triangles so the entire mesh will always be shown.
    	CD3D11_RASTERIZER_DESC d3dRas(D3D11_DEFAULT);
    	d3dRas.CullMode = D3D11_CULL_NONE;
    	d3dRas.MultisampleEnable = true;
    	d3dRas.AntialiasedLineEnable = true;
    
    	ComPtr<ID3D11RasterizerState> p3d3RasState;
    	m_deviceResources->GetD3DDevice()->CreateRasterizerState(&d3dRas, &p3d3RasState);
    	m_deviceResources->GetD3DDeviceContext()->RSSetState(p3d3RasState.Get());
    
    	// Load the scene objects.
    	auto loadMeshTask = Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"gamelevel.cmo",
    		L"",
    		L"",
    		m_meshModels);
    
    
    	(loadMeshTask).then([this]()
    	{
    		// Scene is ready to be rendered.
    		m_loadingComplete = true;
    	});
    }
    
    

    The ReleaseDeviceDependentResources method only needs to clean up the models:

    void Game::ReleaseDeviceDependentResources()
    {
    	for (Mesh* m : m_meshModels)
    	{
    		delete m;
    	}
    	m_meshModels.clear();
    
    	m_loadingComplete = false;
    }
    
    

    The next step is to change the Update method so that only the ball is rotated:

    void Game::Update(DX::StepTimer const& timer)
    {
    	// Rotate scene.
    	m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f;
    }

    You control the rotation speed with the multiplier (0.5f); if you want the ball to spin faster, just use a larger multiplier. With this value the ball rotates at 0.5 radians per second, which is 0.5/(2π) of a full revolution per second. The Render method renders the ball at the desired rotation:

    void Game::Render()
    {
    	// Loading is asynchronous. Only draw geometry after it's loaded.
    	if (!m_loadingComplete)
    	{
    		return;
    	}
    
    	auto context = m_deviceResources->GetD3DDeviceContext();
    
    	// Set render targets to the screen.
    	auto rtv = m_deviceResources->GetBackBufferRenderTargetView();
    	auto dsv = m_deviceResources->GetDepthStencilView();
    	ID3D11RenderTargetView *const targets[1] = { rtv };
    	context->OMSetRenderTargets(1, targets, dsv);
    
    	// Draw our scene models.
    	XMMATRIX rotation = XMMatrixRotationY(m_rotation);
    	for (UINT i = 0; i < m_meshModels.size(); i++)
    	{
    		XMMATRIX modelTransform = rotation;
    
    		String^ meshName = ref new String(m_meshModels[i]->Name());
    
    		m_graphics.UpdateMiscConstants(m_miscConstants);
    
    		m_meshModels[i]->Render(m_graphics, modelTransform);
    	}
    }

    ToggleHitEffect does nothing here; the ball does not change its glow when it is touched:

    void Game::ToggleHitEffect(String^ object)
    {
    
    }
    
    

    Even though you don't want the ball to change its glow, you may still want to know when it has been touched. For that, use this modified OnHitObject method:

    String^ Game::OnHitObject(int x, int y)
    {
    	String^ result = nullptr;
    
    	XMFLOAT3 point;
    	XMFLOAT3 dir;
    	m_graphics.GetCamera().GetWorldLine(x, y, &point, &dir);
    
    	XMFLOAT4X4 world;
    	XMMATRIX worldMat = XMMatrixRotationY(m_rotation);
    	XMStoreFloat4x4(&world, worldMat);
    
    	float closestT = FLT_MAX;
    	for (Mesh* m : m_meshModels)
    	{
    		XMFLOAT4X4 meshTransform = world;
    
    		auto name = ref new String(m->Name());
    
    		float t = 0;
    		bool hit = HitTestingHelpers::LineHitTest(*m, &point, &dir, &meshTransform, &t);
    		if (hit && t < closestT)
    		{
    			result = name;
    		}
    	}
    
    	return result;
    }
    
    

    Run the project and check that the ball is spinning around the y-axis. Now let's make the ball move.

    Moving the Ball

    To move the ball, you must apply a translation to it, for example, moving it up and down. The first thing to do is declare the variable that holds the current position in Game.h:

    class Game
    {
    public:
    	// snip
    private:
           // snip
           float m_translation;

    Then, in the Update method, calculate the current position:

    void Game::Update(DX::StepTimer const& timer)
    {
    	// Rotate scene.
    	m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f;
    	const float maxHeight = 7.0f;
    	auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.0f);
    	m_translation = totalTime > 1.0f ?
    		maxHeight - (maxHeight * (totalTime - 1.0f)) : maxHeight *totalTime;
    }

    This way, the ball goes up and down every 2 seconds. During the first second it moves up, and during the next one it moves down. The Render method computes the resulting matrix and renders the ball at the new position:

    void Game::Render()
    {
    	// snip
    
    	// Draw our scene models.
    	XMMATRIX rotation = XMMatrixRotationY(m_rotation);
    	rotation *= XMMatrixTranslation(0, m_translation, 0);
    
    

    If you run the project now, you will see the ball moving up and down at a constant speed. You should now add some physics to the ball.

    Adding Physics to the Ball

    To add some physics to the ball, you must simulate a force acting on it, representing gravity. From your physics classes (you do remember them, don't you?), you know that uniformly accelerated motion follows these equations:

    s = s0 + v0*t + (1/2)*a*t^2

    v = v0 + a*t

    where s is the position at time t, s0 is the initial position, v0 is the initial velocity, and a is the acceleration. For the vertical motion, a is the acceleration caused by gravity (−10 m/s^2) and s0 is 0 (the ball's motion starts at the ground). The equations therefore become:

    s = v0*t − 5*t^2

    v = v0 − 10*t

    You want the maximum height to be reached at t = 1. At the maximum height the velocity is 0, so the second equation gives the initial velocity:

    0 = v0 − 10 * 1 => v0 = 10 m/s

    And that gives the translation equation for the ball:

    s = 10*t − 5*t^2
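
    If you want to check these numbers before touching the project, a tiny stand-alone C++ program (not part of the game code) prints the trajectory; the ball should peak at 5 units at t = 1 s and be back on the ground at t = 2 s:

    #include <cstdio>

    // Stand-alone sanity check of the kick equation s(t) = 10*t - 5*t^2.
    int main()
    {
    	for (float t = 0.0f; t <= 2.0f; t += 0.5f)
    	{
    		float s = 10.0f * t - 5.0f * t * t;
    		std::printf("t = %.1f s -> height = %.2f units\n", t, s);
    	}
    	return 0;
    }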

    You must change the Update method to use this equation:

    void Game::Update(DX::StepTimer const& timer)
    {
    	// Rotate scene.
    	m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f;
    	auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.0f);
    	m_translation = 10*totalTime - 5 *totalTime*totalTime;
    }

    Now that the ball moves in a more realistic way, it's time to add the soccer field.

    Adding the Soccer Field

    To add the soccer field, you must create a new scene. In the Assets folder, right-click to add a new 3D scene and call it field.fbx. From the toolbar, add a plane and select it, changing its X scale to 107 and its Z scale to 60. Set its Texture1 property to an image of a soccer field. You can use the zoom tool (or press Z) to zoom out.

    Next, you must load the field in CreateDeviceDependentResources in Game.cpp:

    void Game::CreateDeviceDependentResources()
    {
    	// snip
    
    	// Load the scene objects.
    	auto loadMeshTask = Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"gamelevel.cmo",
    		L"",
    		L"",
    		m_meshModels)
    		.then([this]()
    	{
    		return Mesh::LoadFromFileAsync(
    			m_graphics,
    			L"field.cmo",
    			L"",
    			L"",
    			m_meshModels,
    			false  // Do not clear the vector of meshes
    			);
    	});
    
    	(loadMeshTask).then([this]()
    	{
    		// Scene is ready to be rendered.
    		m_loadingComplete = true;
    	});
    }
    
    

    If you run the program, you will see that the field bounces together with the ball. To keep the field from moving, change the Render method:

    // Renders one frame using the Starter Kit helpers.
    void Game::Render()
    {
    	// snip
    
    	for (UINT i = 0; i < m_meshModels.size(); i++)
    	{
    		XMMATRIX modelTransform = rotation;
    
    		String^ meshName = ref new String(m_meshModels[i]->Name());
    
    		m_graphics.UpdateMiscConstants(m_miscConstants);
    
    		if (String::CompareOrdinal(meshName, L"Sphere_Node") == 0)
    			m_meshModels[i]->Render(m_graphics, modelTransform);
    		else
    			m_meshModels[i]->Render(m_graphics, XMMatrixIdentity());
    	}
    }

    With this change, the transformation is applied only to the ball. The field is rendered without any transformation. If you run the code now, you will see that the ball bounces on the field but sinks into it at the bottom. Fix this bug by applying a translation of −0.5 on the y-axis. Select the soccer field and change its Translation property on the y-axis to −0.5. Now, when you run the application, you can see the ball bouncing on the field, as in Figure 8.


    Figure 8. Ball bouncing on the field

    Setting the Camera and the Ball Position

    The ball is positioned at the center of the field, but that's not what you want. For this game, the ball must be placed on the penalty mark. If you look at the scene editor in Figure 9, you can see that, to do this, you must translate the ball along the x-axis by changing the ball's position in the Render method in Game.cpp:

    rotation *= XMMatrixTranslation(63.0, m_translation, 0);

    The ball is moved 63 units along the x-axis, which places it on the penalty mark.


    Figure 9. Field with the X (red) and Z (blue) axes

    With this change, you should no longer see the ball, because the camera is pointed in another direction (at the middle of the field, looking at the center). You must reposition the camera so that it points at the goal line, which you can do in CreateWindowSizeDependentResources in Game.cpp:

    m_graphics.GetCamera().SetViewport((UINT) outputSize.Width, (UINT) outputSize.Height);
    m_graphics.GetCamera().SetPosition(XMFLOAT3(25.0f, 10.0f, 0.0f));
    m_graphics.GetCamera().SetLookAt(XMFLOAT3(100.0f, 0.0f, 0.0f));
    float aspectRatio = outputSize.Width / outputSize.Height;
    float fovAngleY = 30.0f * XM_PI / 180.0f;
    
    if (aspectRatio < 1.0f)
    {
    	// Portrait or snap view
    	m_graphics.GetCamera().SetUpVector(XMFLOAT3(1.0f, 0.0f, 0.0f));
    	fovAngleY = 120.0f * XM_PI / 180.0f;
    }
    else
    {
    	// Landscape view.
    	m_graphics.GetCamera().SetUpVector(XMFLOAT3(0.0f, 1.0f, 0.0f));
    }
    m_graphics.GetCamera().SetProjection(fovAngleY, aspectRatio, 1.0f, 100.0f);

    The camera position is between the halfway line and the penalty mark, looking at the goal line. The new view looks like Figure 10.


    Figure 10. Ball repositioned with the new camera position

    Now you must add the goal.

    Adding the Goal Posts

    To add the goal to the field, you need a new 3D scene containing the goal. You can model your own or get a ready-made model. Once you have the model, add it to the Assets folder so it can be compiled and used.

    The model must be loaded in the CreateDeviceDependentResources method in Game.cpp:

    auto loadMeshTask = Mesh::LoadFromFileAsync(
    	m_graphics,
    	L"gamelevel.cmo",
    	L"",
    	L"",
    	m_meshModels)
    	.then([this]()
    {
    	return Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"field.cmo",
    		L"",
    		L"",
    		m_meshModels,
    		false  // Do not clear the vector of meshes
    		);
    }).then([this]()
    {
    	return Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"soccer_goal.cmo",
    		L"",
    		L"",
    		m_meshModels,
    		false  // Do not clear the vector of meshes
    		);
    });

    Once it is loaded, position it and draw it in the Render method in Game.cpp:

    auto goalTransform = XMMatrixScaling(2.0f, 2.0f, 2.0f) * XMMatrixRotationY(-XM_PIDIV2)* XMMatrixTranslation(85.5f, -0.5, 0);
    
    for (UINT i = 0; i < m_meshModels.size(); i++)
    {
    	XMMATRIX modelTransform = rotation;
    
    	String^ meshName = ref new String(m_meshModels[i]->Name());
    
    	m_graphics.UpdateMiscConstants(m_miscConstants);
    
    	if (String::CompareOrdinal(meshName, L"Sphere_Node") == 0)
    		m_meshModels[i]->Render(m_graphics, modelTransform);
    	else if (String::CompareOrdinal(meshName, L"Plane_Node") == 0)
    		m_meshModels[i]->Render(m_graphics, XMMatrixIdentity());
    	else
    		m_meshModels[i]->Render(m_graphics, goalTransform);
    }

    This change applies a transformation to the goal and renders it. The transformation is a combination of three transforms: a scale (multiplying the original size by 2), a 90-degree rotation, and a translation of 85.5 units on the x-axis and −0.5 units on the y-axis, to match the offset that was applied to the field. This positions the goal facing the field, on the goal line, as shown in Figure 11. Note that the order of the transformations matters: if you apply the rotation after the translation, the goal is rendered in a completely different position and you won't see anything. The snippet after Figure 11 illustrates the difference.


    Figure 11. Field with the goal in position
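
    As a quick illustration of that ordering rule, the sketch below (plain DirectXMath, independent of the project code) combines the same three matrices in two different orders; because matrix multiplication is not commutative, the results are completely different:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Illustration only: the order in which transforms are combined matters.
    void TransformOrderExample()
    {
    	XMMATRIX scale = XMMatrixScaling(2.0f, 2.0f, 2.0f);
    	XMMATRIX rotate = XMMatrixRotationY(-XM_PIDIV2);
    	XMMATRIX translate = XMMatrixTranslation(85.5f, -0.5f, 0.0f);

    	// Order used in the article: scale, then rotate, then translate.
    	// The goal keeps its orientation and lands on the goal line.
    	XMMATRIX correctOrder = scale * rotate * translate;

    	// Reversed order: the translation is applied first and is then rotated
    	// around the origin, so the goal ends up in a completely different place.
    	XMMATRIX reversedOrder = translate * rotate * scale;
    }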

    Kicking the Ball

    All the elements are in place, but the ball is still just bouncing. It's time to kick it. To do that, you must sharpen your physics knowledge again. The kick looks something like Figure 12.


    Figure 12. Diagram of a ball kick

    The ball is kicked with an initial velocity v0 at an angle α (if you don't remember your physics classes, play a little Angry Birds to see this in action). The ball's motion can be decomposed into two different motions: the horizontal motion has constant velocity (I'm assuming there is no friction and no wind), and the vertical motion is similar to the one used before. The equation of the horizontal motion is:

    sX = s0 + v0*cos(α)*t

    . . . and the vertical motion is:

    sY = s0 + v0*sin(α)*t − (1/2)*g*t^2

    Now you have two translations: one on the x-axis and one on the y-axis. Assuming the kick is at 45 degrees, cos(α) = sin(α) = sqrt(2)/2, so v0*cos(α) = v0*sin(α). You want the kick to reach the goal, so the distance must be greater than 86 (the goal line is at 85.5). You want the ball to reach the goal in 2 seconds, so when you substitute these values into the first equation, you get:

    86 = 63 + v0*cos(α) * 2 => v0*cos(α) = 23/2 = 11.5

    Substituting the values into the equations, the translation equation on the y-axis is:

    sY = 0 + 11.5*t − 5*t^2

    . . . and on the x-axis it is:

    sX = 63 + 11.5*t

    With the y-axis equation, you can find the time at which the ball hits the ground again by using the quadratic formula (yes, I know you thought you would never use it, but here it is):

    t = (−b ± sqrt(b^2 − 4*a*c)) / (2*a) => t = (−11.5 ± sqrt(11.5^2 − 4*(−5)*0)) / (2*(−5)) => t = 0 or t = 23/10 = 2.3 s
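
    A small stand-alone snippet (again, not part of the game) confirms the result of the quadratic formula:

    #include <cmath>
    #include <cstdio>

    // Solve 11.5*t - 5*t^2 = 0, i.e. a*t^2 + b*t + c = 0 with a = -5, b = 11.5, c = 0.
    int main()
    {
    	const float a = -5.0f, b = 11.5f, c = 0.0f;
    	float d = std::sqrt(b * b - 4.0f * a * c);
    	float t1 = (-b + d) / (2.0f * a);
    	float t2 = (-b - d) / (2.0f * a);
    	std::printf("roots: %.2f s and %.2f s\n", t1, t2); // expected: 0 and 2.3 seconds
    	return 0;
    }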

    With these equations, you can replace the translation of the ball. First, in Game.h, create variables to store the translations along the three axes:

    float m_translationX, m_translationY, m_translationZ;

    Then, in the Update method in Game.cpp, add the equations:

    void Game::Update(DX::StepTimer const& timer)
    {
    	// Rotate scene.
    	m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f;
    	auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.3f);
    	m_translationX = 63.0 + 11.5 * totalTime;
    	m_translationY = 11.5 * totalTime - 5 * totalTime*totalTime;
    }

    The Render method uses the new translations:

    rotation *= XMMatrixTranslation(m_translationX, m_translationY, 0);

    If you run the program now, you will see the goal with the ball entering right in the middle of it. If you want the ball to go in other directions, you must add a horizontal angle to the kick. You do that with a translation on the z-axis.

    Figure 13 shows that the distance between the penalty mark and the goal is 22.5 units and the distance between the goal posts is 14 units. That gives α = atan(7/22.5), or about 17 degrees. You could calculate the z-axis translation from that angle, but to keep things simple, the ball should reach the goal line at the same time it reaches the goal post. That means it must travel 7/22.5 units on the z-axis for every unit it travels on the x-axis. So the equation on the z-axis is:

    sz = 11.5*t / (22.5/7) => sz ≈ 3.6*t


    Figure 13. Diagram of the distance between the penalty mark and the goal

    That is the translation needed to reach the goal post. Any translation with a smaller speed will have a smaller angle. To reach the goal, the speed on the z-axis must be between −3.6 (left post) and 3.6 (right post). If you require the ball to enter the goal completely, the maximum distance is 6/22.5, and the speed range must be between −3 and 3. With these values, you can set the kick angle with this code in the Update method:

    void Game::Update(DX::StepTimer const& timer)
    {
    	// Rotate scene.
    	m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f;
    	auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.3f);
    	m_translationX = 63.0 + 11.5 * totalTime;
    	m_translationY = 11.5 * totalTime - 5 * totalTime*totalTime;
    	m_translationZ = 3 * totalTime;
    }

    The z-axis translation is used in the Render method:

    rotation *= XMMatrixTranslation(m_translationX, m_translationY, m_translationZ);….
    
    

    You should end up with something like Figure 14.


    Figure 14. Kick with an angle

    Adding a Goalkeeper

    With the ball movement and the goal in place, you must now add a goalkeeper to catch the ball. The goalkeeper will be a distorted cube. In the Assets folder, add a new item (a new 3D scene) and call it goalkeeper.fbx.

    Add a cube from the toolbar and select it. Set its scale to 0.3 on the x-axis, 1.9 on the y-axis, and 1 on the z-axis. Change its MaterialAmbient property to 1 for Red and 0 for Blue and Green to make it red. Change the Red component of MaterialSpecular to 1 and MaterialSpecularPower to 0.2.

    Load the new asset in the CreateDeviceDependentResources method:

    auto loadMeshTask = Mesh::LoadFromFileAsync(
    	m_graphics,
    	L"gamelevel.cmo",
    	L"",
    	L"",
    	m_meshModels)
    	.then([this]()
    {
    	return Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"field.cmo",
    		L"",
    		L"",
    		m_meshModels,
    		false  // Do not clear the vector of meshes
    		);
    }).then([this]()
    {
    	return Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"soccer_goal.cmo",
    		L"",
    		L"",
    		m_meshModels,
    		false  // Do not clear the vector of meshes
    		);
    }).then([this]()
    {
    	return Mesh::LoadFromFileAsync(
    		m_graphics,
    		L"goalkeeper.cmo",
    		L"",
    		L"",
    		m_meshModels,
    		false  // Do not clear the vector of meshes
    		);
    });

    The next step is to position and render the goalkeeper at the center of the goal. You do this in the Render method of Game.cpp:

    void Game::Render()
    {
    	// snip
    
    	auto goalTransform = XMMatrixScaling(2.0f, 2.0f, 2.0f) * XMMatrixRotationY(-XM_PIDIV2)* XMMatrixTranslation(85.5f, -0.5f, 0);
    	auto goalkeeperTransform = XMMatrixTranslation(85.65f, 1.4f, 0);
    
    	for (UINT i = 0; i < m_meshModels.size(); i++)
    	{
    		XMMATRIX modelTransform = rotation;
    
    		String^ meshName = ref new String(m_meshModels[i]->Name());
    
    		m_graphics.UpdateMiscConstants(m_miscConstants);
    
    		if (String::CompareOrdinal(meshName, L"Sphere_Node") == 0)
    			m_meshModels[i]->Render(m_graphics, modelTransform);
    		else if (String::CompareOrdinal(meshName, L"Plane_Node") == 0)
    			m_meshModels[i]->Render(m_graphics, XMMatrixIdentity());
    		else if (String::CompareOrdinal(meshName, L"Cube_Node") == 0)
    			m_meshModels[i]->Render(m_graphics, goalkeeperTransform);
    		else
    			m_meshModels[i]->Render(m_graphics, goalTransform);
    	}
    }

    With this code, the goalkeeper is positioned at the center of the goal, as shown in Figure 15 (note that the camera position is different in this image).


    Figure 15. Goalkeeper at the center of the goal

    Now you must make the goalkeeper move sideways to catch the ball. The user will use the right and left arrow keys to change the goalkeeper's position.

    The goalkeeper's movement is limited by the goal posts, positioned at +7 and −7 units on the z-axis. The goalkeeper is 1 unit wide in each direction, so it can move up to 6 units to each side.

    Key presses are intercepted in the XAML page (DirectXPage.xaml) and forwarded to the Game class. You must add a KeyDown event handler in DirectXPage.xaml:

    <Page
        x:Class="StarterKit.DirectXPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="using:StarterKit"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" KeyDown="OnKeyDown">

    The event handler in DirectXPage.xaml.cpp is:

    void DirectXPage::OnKeyDown(Platform::Object^ sender, Windows::UI::Xaml::Input::KeyRoutedEventArgs^ e)
    {
    	m_main->OnKeyDown(e->Key);
    }

    m_main is the instance of the StarterKitMain class that renders the game scene and the FPS display. You must declare a public method in StarterKitMain.h:

    class StarterKitMain : public DX::IDeviceNotify
    {
    public:
    	StarterKitMain(const std::shared_ptr<DX::DeviceResources>& deviceResources);
    	~StarterKitMain();
    
    	// Public methods passed straight to the Game renderer.
    	Platform::String^ OnHitObject(int x, int y) {
                return m_sceneRenderer->OnHitObject(x, y); }
    	void OnKeyDown(Windows::System::VirtualKey key) {
                m_sceneRenderer->OnKeyDown(key); }….
    
    

    This method forwards the key to the OnKeyDown method in the Game class. Now you must declare the OnKeyDown method in Game.h:

    class Game
    {
    public:
    	Game(const std::shared_ptr<DX::DeviceResources>& deviceResources);
    	void CreateDeviceDependentResources();
    	void CreateWindowSizeDependentResources();
    	void ReleaseDeviceDependentResources();
    	void Update(DX::StepTimer const& timer);
    	void Render();
    	void OnKeyDown(Windows::System::VirtualKey key);….
    
    

    This method processes the pressed key and moves the goalkeeper with the arrow keys. Before creating the method, you must declare a private field in Game.h to store the goalkeeper's position:

    class Game
    {
           // snip
    
    private:
    	// snip
    
    	float m_goalkeeperPosition;
    
    

    The goalkeeper's position is initially 0 and is incremented or decremented when the user presses an arrow key. If the position would go above 6 or below −6, it is not changed. You do this in the OnKeyDown method in Game.cpp:

    void Game::OnKeyDown(Windows::System::VirtualKey key)
    {
    	const float MaxGoalkeeperPosition = 6.0;
    	const float MinGoalkeeperPosition = -6.0;
    	if (key == Windows::System::VirtualKey::Right)
    		m_goalkeeperPosition = m_goalkeeperPosition >= MaxGoalkeeperPosition ?
    	m_goalkeeperPosition : m_goalkeeperPosition + 0.1f;
    	else if (key == Windows::System::VirtualKey::Left)
    		m_goalkeeperPosition = m_goalkeeperPosition <= MinGoalkeeperPosition ?
    	m_goalkeeperPosition : m_goalkeeperPosition - 0.1f;
    }

    The new goalkeeper position is used in the Render method of Game.cpp, where the goalkeeper transform is calculated:

    auto goalkeeperTransform = XMMatrixTranslation(85.65f, 1.40f, m_goalkeeperPosition);

    With these changes, you can run the game and see that the goalkeeper moves right and left when you press the arrow keys (Figure 16).


    Figure 16. Game with the goalkeeper in position

    So far the ball moves all the time, but that's not what you want. The ball should move only after it has been kicked and stop when it reaches the goal. Likewise, the goalkeeper should not move before the ball is kicked.

    You must declare a private field, m_isAnimating, in Game.h so the game knows whether the ball is moving:

    class Game
    {
    public:
    	// snip
    
    private:
    	// snip
    	bool m_isAnimating;
    
    

    This variable is used in the Update and Render methods in Game.cpp so that the ball only moves when m_isAnimating is true:

    void Game::Update(DX::StepTimer const& timer)
    {
    	if (m_isAnimating)
    	{
    		m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f;
    		auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.3f);
    		m_translationX = 63.0f + 11.5f * totalTime;
    		m_translationY = 11.5f * totalTime - 5.0f * totalTime*totalTime;
    		m_translationZ = 3.0f * totalTime;
    	}
    }
    
    void Game::Render()
    {
    	// snip
    
    	XMMATRIX modelTransform;
    	if (m_isAnimating)
    	{
    		modelTransform = XMMatrixRotationY(m_rotation);
    		modelTransform *= XMMatrixTranslation(m_translationX,
                      m_translationY, m_translationZ);
    	}
    	else
    		modelTransform = XMMatrixTranslation(63.0f, 0.0f, 0.0f);
           ….
    
    

    The modelTransform variable was moved to the top of the loop. The arrow keys should be processed in the OnKeyDown method only when m_isAnimating is true:

    void Game::OnKeyDown(Windows::System::VirtualKey key)
    {
    	const float MaxGoalkeeperPosition = 6.0f;
    
    	if (m_isAnimating)
    	{
    		auto goalKeeperVelocity = key == Windows::System::VirtualKey::Right ?
    			0.1f : -0.1f;
    
    		m_goalkeeperPosition = fabs(m_goalkeeperPosition) >= MaxGoalkeeperPosition ?
    		m_goalkeeperPosition :
    							 m_goalkeeperPosition + goalKeeperVelocity;
    	}
    }

    The next step is to kick the ball. This happens when the user presses the spacebar. Declare a new private field, m_isKick, in Game.h:

    class Game
    {
    public:
    	// snip
    
    private:
    	// snip
    	bool m_isKick;
    
    

    Set this field to true in the OnKeyDown method in Game.cpp:

    void Game::OnKeyDown(Windows::System::VirtualKey key)
    {
    	const float MaxGoalkeeperPosition = 6.0f;
    
    	if (m_isAnimating)
    	{
    		auto goalKeeperVelocity = key == Windows::System::VirtualKey::Right ?
    			0.1f : -0.1f;
    
    		m_goalkeeperPosition = fabs(m_goalkeeperPosition) >= MaxGoalkeeperPosition ?
    		m_goalkeeperPosition :
    							 m_goalkeeperPosition + goalKeeperVelocity;
    	}
    	else if (key == Windows::System::VirtualKey::Space)
    		m_isKick = true;
    }

    When m_isKick is true, the animation is started in Update:

    void Game::Update(DX::StepTimer const& timer)
    {
    	if (m_isKick)
    	{
    		m_startTime = static_cast<float>(timer.GetTotalSeconds());
    		m_isAnimating = true;
    		m_isKick = false;
    	}
    	if (m_isAnimating)
    	{
    		auto totalTime = static_cast<float>(timer.GetTotalSeconds()) - m_startTime;
    		m_rotation = totalTime * 0.5f;
    		m_translationX = 63.0f + 11.5f * totalTime;
    		m_translationY = 11.5f * totalTime - 5.0f * totalTime*totalTime;
    		m_translationZ = 3.0f * totalTime;
    		if (totalTime > 2.3f)
    			ResetGame();
    	}
    }

    The start time of the kick is stored in the m_startTime variable (declared as a private field in Game.h) and is used to compute the elapsed time of the kick. If it exceeds 2.3 seconds, the game is reset (the ball must have reached the goal by then). Declare the ResetGame method as private in Game.h:

    void Game::ResetGame()
    {
    	m_isAnimating = false;
    	m_goalkeeperPosition = 0;
    }

    This method sets m_isAnimating to false and resets the goalkeeper's position. The ball doesn't need to be repositioned, because it is drawn at the penalty mark whenever m_isAnimating is false. Another change you must make is the kick angle. This code fixes the kick near the right post:

    m_translationZ = 3.0f * totalTime;

    You should change it so that the kick is random and the user doesn't know where it will go. Declare a private field m_ballAngle in Game.h and initialize it when the ball is kicked in the Update method:

    void Game::Update(DX::StepTimer const& timer)
    {
    	if (m_isKick)
    	{
    		m_startTime = static_cast<float>(timer.GetTotalSeconds());
    		m_isAnimating = true;
    		m_isKick = false;
    		m_ballAngle = (static_cast <float> (rand()) /
    			static_cast <float> (RAND_MAX) -0.5f) * 6.0f;
    	}
    …
    
    

    rand()/RAND_MAX yields a number between 0 and 1. Subtract 0.5 from the result so the number falls between −0.5 and 0.5, and multiply it by 6 so the final angle falls between −3 and 3. To get different sequences on every run, you must seed the generator by calling srand in the CreateDeviceDependentResources method:

    void Game::CreateDeviceDependentResources()
    {
    	srand(static_cast <unsigned int> (time(0)));
    …
    
    

    To call the time function, you must include <ctime>. You will use m_ballAngle in the Update method to set the new angle for the ball:

    m_translationZ = m_ballAngle * totalTime;

    Most of the code is in place, but you still need to know whether the goalkeeper caught the ball or a goal was scored. Use a simple approach: when the ball reaches the goal line, check whether the ball's rectangle intersects the goalkeeper's rectangle. If you want, you can use more elaborate methods to detect a goal, but for our needs this is enough. All the calculations are done in the Update method:

    void Game::Update(DX::StepTimer const& timer)
    {
    	if (m_isKick)
    	{
    		m_startTime = static_cast<float>(timer.GetTotalSeconds());
    		m_isAnimating = true;
    		m_isKick = false;
    		m_isGoal = m_isCaught = false;
    		m_ballAngle = (static_cast <float> (rand()) /
    			static_cast <float> (RAND_MAX) -0.5f) * 6.0f;
    	}
    	if (m_isAnimating)
    	{
    		auto totalTime = static_cast<float>(timer.GetTotalSeconds()) - m_startTime;
    		m_rotation = totalTime * 0.5f;
    		if (!m_isCaught)
    		{
    			// ball traveling
    			m_translationX = 63.0f + 11.5f * totalTime;
    			m_translationY = 11.5f * totalTime - 5.0f * totalTime*totalTime;
    			m_translationZ = m_ballAngle * totalTime;
    		}
    		else
    		{
    			// if ball is caught, position it in the center of the goalkeeper
    			m_translationX = 83.35f;
    			m_translationY = 1.8f;
    			m_translationZ = m_goalkeeperPosition;
    		}
    		if (!m_isGoal && !m_isCaught && m_translationX >= 85.5f)
    		{
    			// ball passed the goal line - goal or caught
    			auto ballMin = m_translationZ - 0.5f + 7.0f;
    			auto ballMax = m_translationZ + 0.5f + 7.0f;
    			auto goalkeeperMin = m_goalkeeperPosition - 1.0f + 7.0f;
    			auto goalkeeperMax = m_goalkeeperPosition + 1.0f + 7.0f;
    			m_isGoal = (goalkeeperMax < ballMin || goalkeeperMin > ballMax);
    			m_isCaught = !m_isGoal;
    		}
    
    		if (totalTime > 2.3f)
    			ResetGame();
    	}
    }

    Declare two private fields in Game.h: m_isGoal and m_isCaught. These fields tell whether a goal was scored or the goalkeeper caught the ball. If both are false, the ball is still traveling. When the ball reaches the goal line, the program calculates the bounds of the ball and the goalkeeper and determines whether the ball's bounds overlap the goalkeeper's bounds. If you look at the code, you will see that I added 7.0 to each bound. I did that because the bounds can be negative or positive, which would complicate the check. Adding 7.0 guarantees that all the numbers are positive, which simplifies the calculation. If the ball is caught, it is positioned at the center of the goalkeeper. m_isGoal and m_isCaught are reset whenever a new kick happens. Now it's time to add a scoreboard to the game.
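
    As a side note, the same check can also be written directly on the signed coordinates. The helper below is just a sketch of that alternative (it is not the code used in the article):

    // Two intervals [aMin, aMax] and [bMin, bMax] overlap exactly when
    // aMin <= bMax && bMin <= aMax, regardless of the sign of the values.
    static bool IntervalsOverlap(float aMin, float aMax, float bMin, float bMax)
    {
    	return aMin <= bMax && bMin <= aMax;
    }

    // Possible use with the values from Update(), without the +7.0f shift:
    // bool caught = IntervalsOverlap(m_translationZ - 0.5f, m_translationZ + 0.5f,
    //                                m_goalkeeperPosition - 1.0f, m_goalkeeperPosition + 1.0f);
    // bool goal   = !caught;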

    Adding a Scoreboard

    In a DirectX game you could render the scoreboard with Direct2D, but when you are developing a Windows 8 game you have another option: XAML. You can overlay XAML elements on top of your game and create a bridge between the XAML elements and your game logic. This is an easier way to display information and interact with the user, because you don't have to deal with element positioning, rendering, or update loops.

    The Starter Kit comes with a XAML scoreboard (the one used to count the taps on the objects). You just have to modify it to show the game score. The first step is to change DirectXPage.xaml to update the scoreboard:

    <SwapChainPanel x:Name="swapChainPanel" Tapped="OnTapped">
        <Border VerticalAlignment="Top" HorizontalAlignment="Center" Padding="10" Background="Black"
                Opacity="0.7">
            <StackPanel Orientation="Horizontal">
                <TextBlock x:Name="ScoreUser" Text="0" Style="{StaticResource HudCounter}"/>
                <TextBlock Text="x" Style="{StaticResource HudCounter}"/>
                <TextBlock x:Name="ScoreMachine" Text="0" Style="{StaticResource HudCounter}"/>
            </StackPanel>
        </Border>
    </SwapChainPanel>

    While you are there, you can remove the app bar, because it won't be used in this game. You removed all the tap counters from the scoreboard, so you must also remove the code that references them in the OnTapped handler in DirectXPage.xaml.cpp:

    void DirectXPage::OnTapped(Object^ sender, TappedRoutedEventArgs^ e)
    {
    
    }
    
    

    You can also remove OnPreviousColorPressed, OnNextColorPressed, and ChangeObjectColor from the cpp and h files, because they were used by the app bar you just removed.

    To update the game score, there must be some way for the Game class and the XAML page to communicate. The score is updated in the Game class, while it is displayed on the XAML page. One way to do this is to create an event in the Game class, but that approach has a problem. If you add an event to the Game class, you get a compilation error: "a WinRT event declaration must occur in a WinRT class." This happens because Game is not a WinRT (ref) class. To be a WinRT class, it must be declared as public ref and sealed:

    public ref class Game sealed

    You could change the class to do that, but it's better to go in another direction: create a new class (in this case, a WinRT class) and use it to communicate between the Game class and the XAML page. Create a new class and call it ViewModel:

    #pragma once
    ref class ViewModel sealed
    {
    public:
    	ViewModel();
    };

    In ViewModel.h, add the event and the properties needed to update the score:

    #pragma once
    namespace StarterKit
    {
    	ref class ViewModel sealed
    	{
    	private:
    		int m_scoreUser;
    		int m_scoreMachine;
    	public:
    		ViewModel();
    		event Windows::Foundation::TypedEventHandler<Object^, Platform::String^>^ PropertyChanged;
    
    		property int ScoreUser
    		{
    			int get()
    			{
    				return m_scoreUser;
    			}
    
    			void set(int value)
    			{
    				if (m_scoreUser != value)
    				{
    					m_scoreUser = value;
    					PropertyChanged(this, L"ScoreUser");
    				}
    			}
    		};
    
    		property int ScoreMachine
    		{
    			int get()
    			{
    				return m_scoreMachine;
    			}
    
    			void set(int value)
    			{
    				if (m_scoreMachine != value)
    				{
    					m_scoreMachine = value;
    					PropertyChanged(this, L"ScoreMachine");
    				}
    			}
    		};
    	};
    
    }
    
    

    Declare a private field of type ViewModel in Game.h (you must include ViewModel.h in Game.h). You must also define a public getter for this field:

    class Game
    {
    public:
           // snip
           StarterKit::ViewModel^ GetViewModel();
    private:
    	StarterKit::ViewModel^ m_viewModel;

    This field is initialized in the constructor in Game.cpp:

    Game::Game(const std::shared_ptr<DX::DeviceResources>& deviceResources) :
    m_loadingComplete(false),
    m_deviceResources(deviceResources)
    {
    	CreateDeviceDependentResources();
    	CreateWindowSizeDependentResources();
    	m_viewModel = ref new ViewModel();
    }

    The body of the getter is:

    StarterKit::ViewModel^ Game::GetViewModel()
    {
    	return m_viewModel;
    }

    When the current kick ends, the score variables are updated in ResetGame in Game.cpp:

    void Game::ResetGame()
    {
    	if (m_isCaught)
    		m_viewModel->ScoreUser++;
    	if (m_isGoal)
    		m_viewModel->ScoreMachine++;
    	m_isAnimating = false;
    	m_goalkeeperPosition = 0;
    }

    When either of these two properties changes, the PropertyChanged event is raised and can be handled in the XAML page. There is still one indirection here: the XAML page does not have direct access to the Game class (which is not a ref class); instead, it calls the StarterKitMain class. You must create a getter for the ViewModel in StarterKitMain.h:

    class StarterKitMain : public DX::IDeviceNotify
    {
    public:
    	// snip
    	StarterKit::ViewModel^ GetViewModel() { return m_sceneRenderer->GetViewModel(); }

    With this infrastructure in place, you can handle the ViewModel's PropertyChanged event in the constructor in DirectXPage.xaml.cpp:

    DirectXPage::DirectXPage():
    	m_windowVisible(true),
    	m_hitCountCube(0),
    	m_hitCountCylinder(0),
    	m_hitCountCone(0),
    	m_hitCountSphere(0),
    	m_hitCountTeapot(0),
    	m_colorIndex(0)
    {
    	// snip
    
    	m_main = std::unique_ptr<StarterKitMain>(new StarterKitMain(m_deviceResources));
    	m_main->GetViewModel()->PropertyChanged += ref new
               TypedEventHandler<Object^, String^>(this, &DirectXPage::OnPropertyChanged);
    	m_main->StartRenderLoop();
    }
    
    
    
    The handler updates the score (you must also declare it in DirectXPage.xaml.h):
    
    void StarterKit::DirectXPage::OnPropertyChanged(Platform::Object ^sender, Platform::String ^propertyName)
    {
    
    		if (propertyName == "ScoreUser")
    		{
    			auto scoreUser = m_main->GetViewModel()->ScoreUser;
    			Dispatcher->RunAsync(CoreDispatcherPriority::Normal, ref new DispatchedHandler([this, scoreUser]()
    			{
    				ScoreUser->Text = scoreUser.ToString();
    			}));
    		}
    		if (propertyName == "ScoreMachine")
    		{
    			auto scoreMachine= m_main->GetViewModel()->ScoreMachine;
    			Dispatcher->RunAsync(CoreDispatcherPriority::Normal, ref new DispatchedHandler([this, scoreMachine]()
    			{
    				ScoreMachine->Text = scoreMachine.ToString();
    			}));
    		}
    
    }
    
    

    Now the score is updated every time a goal is scored or the goalkeeper catches the ball (Figure 17).


    Figure 17. Game with score updates

    Using Touch and Sensors in the Game

    The game works well, but you can still add some polish to it. The new Ultrabook™ devices have touch input and sensors that you can use to improve the game. Instead of using the keyboard to kick the ball and move the goalkeeper, the user can kick the ball by touching the screen and move the goalkeeper by tilting the screen left or right.

    To kick the ball with a touch on the screen, use the OnTapped event in DirectXPage.cpp:

    void DirectXPage::OnTapped(Object^ sender, TappedRoutedEventArgs^ e)
    {
    	m_main->OnKeyDown(VirtualKey::Space);
    }

    The code calls the OnKeyDown method, passing the spacebar as a parameter, exactly as if the user had pressed the spacebar. If you want, you can improve the code by getting the touch position and kicking the ball only when the touch lands on it. I leave that to you as homework. As a starting point, the Starter Kit has code to detect whether the user touched an object in the scene; a possible sketch follows below.
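
    Here is one way that could look: a sketch only (not the article's final code), reusing the OnHitObject helper that StarterKitMain already exposes and the "Sphere_Node" mesh name used earlier in this article:

    void DirectXPage::OnTapped(Object^ sender, TappedRoutedEventArgs^ e)
    {
    	// Sketch: only kick when the tap actually lands on the ball.
    	auto position = e->GetPosition(swapChainPanel);
    	auto objectName = m_main->OnHitObject(
    		static_cast<int>(position.X), static_cast<int>(position.Y));

    	if (objectName != nullptr &&
    		Platform::String::CompareOrdinal(objectName, L"Sphere_Node") == 0)
    	{
    		m_main->OnKeyDown(VirtualKey::Space); // same effect as pressing the spacebar
    	}
    }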

    The next step is to move the goalkeeper when the user tilts the screen. For that, you use the inclinometer, which detects every movement of the screen. This sensor returns three readings: pitch, roll, and yaw, corresponding to rotations around the x, y, and z axes, respectively. For this game you will use only the roll reading.

    To use this sensor, you must get an instance of it, which you can do with the GetDefault method. Then you set the report interval, with code like this in Game::CreateDeviceDependentResources in Game.cpp:

    void Game::CreateDeviceDependentResources()
    {
    	m_inclinometer = Windows::Devices::Sensors::Inclinometer::GetDefault();
    	if (m_inclinometer != nullptr)
    	{
    		// Establish the report interval for all scenarios
    		uint32 minReportInterval = m_inclinometer->MinimumReportInterval;
    		uint32 reportInterval = minReportInterval > 16 ? minReportInterval : 16;
    		m_inclinometer->ReportInterval = reportInterval;
    	}
    ...

    m_inclinometer is a private field declared in Game.h. In the Update method, reposition the goalkeeper:

    void Game::Update(DX::StepTimer const& timer)
    {
    	// snip
    		SetGoalkeeperPosition();
    		if (totalTime > 2.3f)
    			ResetGame();
    	}
    }

    SetGoalkeeperPosition repositions the goalkeeper according to the inclinometer reading:

    void StarterKit::Game::SetGoalkeeperPosition()
    {
    
    	if (m_isAnimating && m_inclinometer != nullptr)
    	{
    		Windows::Devices::Sensors::InclinometerReading^ reading =
                        m_inclinometer->GetCurrentReading();
    		auto goalkeeperVelocity = reading->RollDegrees / 100.0f;
    		if (goalkeeperVelocity > 0.3f)
    			goalkeeperVelocity = 0.3f;
    		if (goalkeeperVelocity < -0.3f)
    			goalkeeperVelocity = -0.3f;
    		m_goalkeeperPosition = fabs(m_goalkeeperPosition) >= 6.0f ?
                     m_goalkeeperPosition : m_goalkeeperPosition + goalkeeperVelocity;
    	}
    }

    With this change, you can move the goalkeeper by tilting the screen. You now have a finished game.

    Measuring Performance

    With the game running well on your development machine, you should test it on a less powerful mobile device. Developing on a powerful machine with a top-of-the-line graphics processor running at 60 FPS is one thing. Running on a device with an Intel® Atom™ processor and integrated graphics is something completely different.

    Your game should run well on both machines. To measure performance, you can use the tools included in Visual Studio or Intel® Graphics Performance Analyzers (Intel® GPA), a suite of graphics analyzers that can detect bottlenecks and improve your game's performance. Intel GPA gives you a graphical analysis of how your game is running and can help you make it run faster and better.

    Conclusion

    You have finally reached the end of the journey. You started with a dancing teapot and ended up with a DirectX game with keyboard and sensor input. With the languages becoming more and more alike, C++/CX was not that hard to use for a C# developer.

    The biggest difficulty is mastering the 3D models: making them move and positioning them in a familiar way. For that you had to use some physics, geometry, trigonometry, and math.

    In short, developing a game is not an impossible task. With some patience and the right tools, you can create great games with excellent performance.

    Special Thanks

    I would like to thank Roberto Sonnino for his tips for this article and for his technical review.

    Image Credits

    For More Information

    About the Author

    Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) in Brazil. He is a developer, consultant, and author who has written five Delphi books, published in Portuguese by Pearson Education Brazil, and many articles for Brazilian and American magazines and websites.

     

    Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed.  Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

     

    Intel, the Intel logo, Intel Atom, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    *Other names and brands may be claimed as the property of others.
    Copyright © 2014. Intel Corporation. All rights reserved.

     

  • Bruno Sonnino
  • DirectX
  • soccer game
  • Phong effect
  • touch
  • sensor
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • C/C++
  • Beginner
  • Microsoft DirectX*
  • Game Development
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Tablet PC
  • URL
  • Zone topic: 

    IDZone

    Fastboot support for Linux on Dell Venue 8


    Hi, can you supply a fastboot binary that works with dell venue tablets? adb works just fine, lsusb even list the tablet in fastboot mode, but the fastboot binary does nothing, of course it can find and communicate with all my other androids via fastboot. Windows fastboot.exe has no problems communicating with the tablet.

    Bringing Young Readers to Intel® Perceptual Computing: Developing Clifford’s Reading Adventures for All-in-One PCs


    Download PDF

    Introduction

    This article describes the development of Clifford’s Reading Adventures, a set of interactive educational games for children from Scholastic Interactive LLC. The immersive gesture and voice experience in this new title was built with the Intel® Perceptual Computing SDK 2013. We discuss new approaches for capturing children's gestures and voices with perceptual computing, troubleshooting strategies for the SDK, and the considerations made to support portable All-in-One PCs.


    Figure 1. Clifford The Big Red Dog*

    The Educational Game Concept

    Scholastic Interactive LLC is part of Scholastic, a global children's publishing, education, and media company. Scholastic Interactive's goal is to make children's games both fun and educational. Scholastic was very interested in the new field of building children's educational games with perceptual computing and gesture technology because the interaction is so intuitive: children can use it without having to learn anything first. Combining the perceptual computing platform, with its voice and gesture technology, and the adventures of Clifford and his friends gives children ages 3 and up a simple, natural way to interact with the story material.

    In this series of four interactive Clifford story units, players watch each adventure and interact with the story using the voice and touch-screen capabilities of their computer. The stories invite children to participate by asking them to "help" Clifford in various ways through gesture- and voice-controlled activities.


    Figure 2. Clifford's Reading Adventures menu

    Learning Through Interactive Experiences

    With Clifford, Scholastic saw a great opportunity to bring interactive technology to its youngest readers, who can literally watch Clifford respond to their voices and movements. Within an engaging storyline, children watch animated segments of each adventure and actively interact with the characters and their activities by touching the screen or speaking their answers. Children can also advance the story by playing games that use touch and gestures. Each game is based on early core literacy skills and can be repeated as often as desired.

    The Intel Perceptual Computing SDK 2013 provides the APIs, samples, and tutorials needed to interpret the sensors behind the gestures and voices of the children playing the game. Core SDK capabilities such as speech recognition, close-range hand and finger tracking, face analysis, augmented reality, and background subtraction let software developers quickly integrate these features into applications on the latest tablets, Intel Ultrabook™ computers, and All-in-One PCs. Using the microphone, camera, touch screen, and orientation and location capabilities now common on tablets, convertible laptops, and All-in-Ones, developers can build far more immersive applications.


    Figure 3. Intel® Perceptual Computing SDK

    The Development Team

    Scholastic interviewed a number of development teams about the game concept, the interactive activities, and key child-usability issues. In the end, Scholastic chose to work with Symbio, because their team had already built and shipped gesture and voice recognition and because of their extensive experience in children's education, games, and usability.

    Developing with the Intel® Perceptual Computing Platform

    Adapting perceptual computing technology to children's movements, gestures, and voices posed several challenges. Scholastic's general process is to test every model extensively to learn whether the game design is appropriate and whether the game level is achievable. This testing helps the team identify the challenges that test players (children in the target age range) run into, so that solutions appropriate for that age group can be designed and developed.

    Several areas during development need special attention from perceptual computing developers. Here are some key takeaways from the development of the Clifford application.

     

    Calibrating Speech Recognition

    Speech recognition requires several rounds of verification and filtering to reach an acceptable level of performance. Because children's voices keep changing as they grow, especially in the young target audience of the Clifford series, speech recognition must be calibrated carefully to handle the subtleties of children's voices and speech patterns.


    Figure 4. Game screen that uses speech recognition

    Validating and Locking Gestures

    One of the games in Clifford’s Reading Adventures asks the child to help Clifford catch toys falling from a "toy tree." The child grabs the on-screen basket with a hand and moves it left and right to catch the toys.


    Figure 5. Clifford's toy tree

    The developers added algorithmic checks to validate the gesture and lock it to the player's hand controlling the basket, so the basket moves in response to the child's gestures. During the testing phase, young players were highly engaged, had fun, and timed their catches well. Before that testing, while experimenting in the development lab, the developers had wrongly assumed that children would control the catching as precisely as the adult evaluators did. Studying the characteristics of children's gestures gave the developers plenty to learn and forced them to rethink the game design around the less precise gestures of young children. Children's movements are larger and usually less steady, and the resulting motion noise is a real challenge for accurate capture, because the sensor struggles to recognize and interpret complex, overlapping movements. In addition, modeling a gesture and repeating it reliably requires care and moderation to deliver a high-quality experience. Accommodating children's gestures meant widening the motion-capture area so that even imprecise gestures are recognized and trigger the intended response.

    For example, in another mini game, players help Clifford pull weeds from the vegetable garden. Instead of asking children to reach into the weeds, grab them, and lift the hand to pull them out, the developers switched to a close-hand/open-hand motion to represent grabbing a weed and throwing it away. Adapting to children's developmental abilities and movements made the game much more successful.


    Figure 6. A child uses gestures to help Clifford pull weeds

    Below is the approach used in the game tutorial to tune the player's gestures; it asks the user to move both hands and rotate a sphere. Applying some exponential smoothing ("//exponential smoothing") in the game shown in Figure 7 gives the player better control and smoother motion. The smoothing helps eliminate, or at least reduce, unintended player movements that the game should ignore.


    Figure 7. Sphere rotation screen


    Figure 8. Sphere rotation tutorial code sample
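
    The shipped smoothing code is only shown as a screenshot above. As a rough, assumption-based illustration of the idea (not the game's actual implementation), an exponential smoothing filter for a tracked hand position can be as small as this:

    // Illustration of exponential smoothing for a tracked 2D hand position.
    // An alpha close to 0 smooths heavily (stable but slower to follow the hand);
    // an alpha close to 1 follows the raw sensor data almost directly.
    struct SmoothedPoint
    {
    	float x = 0.0f;
    	float y = 0.0f;
    	bool initialized = false;

    	void Update(float rawX, float rawY, float alpha = 0.2f)
    	{
    		if (!initialized)
    		{
    			x = rawX;
    			y = rawY;
    			initialized = true;
    			return;
    		}
    		// Exponential smoothing: new = alpha * raw + (1 - alpha) * previous.
    		x = alpha * rawX + (1.0f - alpha) * x;
    		y = alpha * rawY + (1.0f - alpha) * y;
    	}
    };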

    Troubleshooting the Intel® Perceptual Computing SDK

    The immersive experiences created with the SDK let the game respond to the player's movements in real time, giving players the feeling that they are physically part of the game. However, there are still limits to how well the camera can track complex gestures and how well voice detection recognizes the specific responses children give.

     

    Gestures

    The perceptual computing camera captures a range of roughly 2-3 feet in front of it. Because of this short distance between the camera and the subject, the researchers and the development team found that simple motions and small gestures work better than large or complex movements, which can fall outside the camera's range.

    Trial and error was needed to make sure the gestures worked as intended. The development team had to account for different environmental conditions, lighting, and distance from the camera.

    With the SDK, the APIs, and the technology, basic gestures are easy to enable, because tutorials, sample code, and scaffolding are already included in the SDK. Once the development environment is set up, you can follow one of the tutorials (such as the finger tracking sample) to explore the sensor-to-code relationship used with the SDK.


    Figure 9. The gesture sensor-to-code relationship in the Intel® Perceptual Computing SDK 2013

    The developers found that the SDK does not document all of the coordinate systems needed for gestures, so they had to work out how they fit together through trial and error.


    Figure 10. Gesture coordinate image

    The team initially used the node[8].positionImage.x/y approach and discarded the depth information, because the implemented gestures did not need it. Later, the team found a better way: they used the depth image and searched for the nearest pixels, which helped capture gestures reliably. They then added a good amount of smoothing to further improve gesture detection.

    Speech Recognition

    The game's speech recognition is heavily affected by the device and the scenario; it works on some devices and in some situations but is completely unusable on others.

    In the game, children must be prompted to repeat the appropriate commands, which are picked up through the microphone. Recognition has to be accurate even with background noise and game music playing. Speech recognition can be used in dictation mode, which detects whatever you say, or in lexicon mode, which matches what is said against a dictionary that you define in the game.

    First, the team tried dictation mode, configured to accept any recognizable sound, because young children sometimes do not articulate clearly. That did not work as expected. The team then switched to lexicon mode, which worked well in clean scenarios, that is, when words were spoken clearly. The team experimented with adding word variants so that similar-sounding words would also be accepted when pronunciation was unclear. However, lexicon mode only helps up to a point: the more keywords there are, the more likely errors and mismatches become. The developers had to find a balance between the accepted keywords and the potential error rate. In the final application, the team kept the accepted words to a minimum to make the interaction simpler for children.
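
    No SDK calls are shown here, but the following sketch illustrates that design decision: the game logic matches whatever string the recognizer returns against a deliberately short command list (the commands themselves are made-up examples, not the game's real vocabulary):

    #include <algorithm>
    #include <cwctype>
    #include <string>
    #include <utility>
    #include <vector>

    // Illustration only: map a recognized phrase to a small set of game commands.
    // The shorter this list, the lower the chance of false matches.
    enum class Command { None, Go, Stop, Catch };

    Command MatchCommand(const std::wstring& recognized)
    {
    	static const std::vector<std::pair<std::wstring, Command>> commands = {
    		{ L"go",    Command::Go },
    		{ L"stop",  Command::Stop },
    		{ L"catch", Command::Catch },
    	};

    	std::wstring lower = recognized;
    	std::transform(lower.begin(), lower.end(), lower.begin(),
    		[](wchar_t c) { return static_cast<wchar_t>(std::towlower(c)); });

    	for (const auto& entry : commands)
    	{
    		// Accept the phrase if the keyword appears anywhere in it.
    		if (lower.find(entry.first) != std::wstring::npos)
    			return entry.second;
    	}
    	return Command::None;
    }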

    Screen Size Matters: All-in-One Awareness

    As touch-screen manufacturing capabilities have improved, larger screens have reached the market. Scaling up to larger screens was another area Clifford’s Reading Adventures had to address. Many of these large screens are integrated into one type of computer: the All-in-One (AIO).

    An All-in-One consists of a display (18 to 55 inches) with the motherboard built in behind the screen. AIOs include high-performance processors, full high-definition (HD) resolution (1080p/720p), and Bluetooth* wireless keyboards and mice, and some have a built-in high-capacity battery for excellent portability. The AIO is one of the fastest-growing PC categories. A big reason for its popularity is that an AIO does everything a PC can do: tracking household expenses, doing homework, playing interactive games, browsing the web, chatting with friends, and watching TV and movies.

    Many new portable AIOs (pAIOs) also offer the flexibility to add a range of capabilities. pAIO devices let game and application developers take advantage of a large screen, high-performance networking, and a multi-touch user interface (UI), all in one slim, portable device that can be used in both tilted and flat modes. The built-in battery keeps the experience going, and built-in wireless networking makes it easy to move the device from one spot to another. The large HD display uses a high-end graphics processor and delivers a full multi-touch user experience (UX). These characteristics free developers from the constraints of single-user mobile devices.

    The Clifford developers were excited to see players enjoying their game on a large screen, so they made sure the game runs well at a 1920x1080 screen resolution.

    Summary

    The team had a great time during development and testing, and learned a lot from the user research done with the target audience: children. We got plenty of help not only from this structured testing but also from our own families, and watching family members play the final design was our biggest reward. One of our senior developers brought the game home to his three-year-old daughter and told the development team that she was completely absorbed and had a wonderful time. Victory!


    Figure 11. Clifford and happy friends

    The Scholastic team is excited to use the technology in more games. Scholastic and Symbio are working together on a new game that uses the Intel® RealSense™ 3D SDK, planned for release in fall 2014.


    Figure 12. The game fully live

    Intel® RealSense™ Technology

    Intel® RealSense™ technology, first announced at CES 2014, is the new name and brand for Intel® Perceptual Computing technology, an intuitive user interface SDK that includes the capabilities Intel introduced in 2013: speech recognition, gestures, hand and finger tracking, and facial recognition. With Intel RealSense technology, developers gain additional capabilities, including scanning, modifying, printing, and sharing in 3D, plus major advances in augmented reality interfaces. Using these new features, users can naturally manipulate scanned 3D objects with advanced hand- and finger-sensing technology.

    References and Related Links

    About the Author

    Tim Duncan is an Intel engineer described by friends as "Mr. Gidget-Gadget." Currently helping developers integrate technology into solutions, Tim has decades of experience in chip manufacturing and systems integration. Find him on the Intel® Developer Zone: Tim Duncan (Intel)

     

    Notices

    The source code provided by Scholastic Interactive LLC offers a model strategy for adding exponential smoothing to applications that use Intel Perceptual Computing technology on the Windows 8 platform.

    Scholastic Sample Source Code License

    Copyright (c) 2014, Scholastic Interactive LLC. The code applies to the exponential smoothing functionality contained in the Clifford’s Reading Adventures 1.0 game (the "Sample Code"). All rights reserved.

    Redistribution and use of the Sample Code in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

    • Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer.
    • Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution.
    • Neither the Sample Code name Clifford’s Reading Adventures, nor any name or trademark contained herein, nor the names of the copyright holder or its affiliates, nor the names of the Sample Code contributors may be used to endorse, promote, or otherwise support products derived from this software without specific prior written permission.

    This software is provided by the copyright holders and contributors "as is," without any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. In no event shall the copyright holder or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise), arising in any way out of the use of this software, even if advised of the possibility of such damage. For the avoidance of doubt, the rights granted hereunder are limited to the specific software identified above, and no license or right is granted to users of this Sample Code in (A) any other source or binary code that constitutes or is embedded in THE CLIFFORD’S READING ADVENTURES game, or any other software or tools; or (B) any other intellectual property of the copyright holder or its affiliates.

    Clifford Artwork © Scholastic Entertainment Inc.  CLIFFORD THE BIG RED DOG and associated logos are trademarks of Norman Bridwell.  All rights reserved.

     

    Intel, the Intel logo, and RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2014 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

     

  • All-in-One
  • Gesture Recognition
  • Voice Recognition
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Intel® Perceptual Computing SDK
  • Perceptual Computing
  • Game Development
  • Microsoft Windows* 8 Desktop
  • User Experience and Design
  • Laptop
  • URL
  • Zone topic: 

    IDZone

    Rover – A LEGO* Self-Driving Car


    Download PDF

    By Martin Wojtczyk and Devy Tan-Wojtczyk

    1. INTRODUCTION

    This article gives a brief overview of Rover, then focuses on our implementation of the human-robot interface utilizing the Intel® Perceptual Computing SDK for gesture and face detection. For a short introduction to Rover’s features, see the Intel® Developer Zone video from Game Developers Conference 2014 in San Francisco:


    Figure 1: Intel® Developer Zone interview with Rover at Game Developers Conference 2014.

    In comparatively contemporary times robots have either been relegated behind closed doors of large industrial manufacturing plants or demonized in movies such as Terminator where they were depicted as destroyers of the human race. Both stereotypes contribute to creating an unfounded fear in self-operating machines losing control and harming the living. But now, vacuum-cleaning and lawn-mowing robots, among others, are beginning a new trend: service robots as dedicated helpers in shared environments with humans. The miniaturization and cost-effective production of range and localization sensors on the one hand and the ever-increasing compute power of modern processors on the other, enable the creation of smart, sensing robots for domestic use cases.

    In the future, robots will require intelligent interactions with their environment, including adapting to human emotions. State-of-the-art hardware and software, such as the Intel Perceptual Computing SDK paired with the Creative* Interactive Gesture Camera, are paving the way for smarter, connected devices, toys, and domestic helpers [1, 2].

    2. CUBOTIX ROVER

    When Intel announced the Perceptual Computing Challenge in 2013, our team, Devy and Martin Wojtczyk, brainstormed possible use cases utilizing the Intel Perceptual Computing SDK. The combination of a USB-powered camera with an integrated depth sensor and an SDK that enables gesture recognition, face detection, and voice interaction resulted in us building an autonomous, mobile, gesture-controlled and sensing robot called Rover. We were very excited to be selected for an award [3]. Since then, we launched the website http://www.cubotix.com with updates on Rover and are in the process of creating an open hardware community.

    The Cubotix Rover is our attempt to use advanced robotic algorithms to transform off-the-shelf hardware into a smart home robot, capable of learning and understanding unknown environments without prior programming. Instead of unintuitive control panels, the robot is instructed through gestures, natural language, and even facial expressions. Advanced robotic algorithms make Rover location aware and enable it to plan collision-free paths.

    2.1. Gesture Recognition


    Figure 2:Showing a thumbs-up gesture makes Rover happy and mobilizes the robot. Photo courtesy California Academy of Sciences.

    Hand gestures are a common form of communication among humans. Think of the police officer in the middle of a loud intersection in Times Square gesturing the stop sign with his open palm facing approaching traffic. Rover is equipped to recognize, respond to, and act on hand gestures captured through the 3D camera. You can mobilize this robot by gesturing thumbs-up, and in response it will also say “Let’s go!” This robot frowns when you gesture a thumbs-down. Gesturing a high-five prompts Rover to crack jokes, such as “If I had arms, I would totally high-five you”. Gesturing a peace sign prompts Rover to say “Peace”. These hand gestures and the resulting robotic vocal responses are completely customizable and programmable.


    Figure 3: Showing a thumbs-down gesture stops the robot and makes it sad. Photo courtesy California Academy of Sciences.

    2.2. Facial Recognition

    Facial expression is perhaps the most revealing and honest of all the other means of communication. Recognition of these expressions and being able to respond appropriately or inappropriately can mean the difference between forming a bond or a division with another human being. With artificial intelligence the gap separating machines and humans can begin to close if robots are able to empathize. By capturing facial expressions through the camera, Rover can detect smiles or frowns and respond appropriately. Rover knows when a human has come near it through its facial detection algorithms and can greet them by saying “Hello my name is Rover. What’s your name?”, to which most people have responded just as they would with another human being by saying “Hello I’m ________”. After initiating the conversation, Rover utilizes the Perceptual Computing SDK’s face analysis features to distinguish three possible states of the person in front of the camera: happy, sad, or neutral, and can respond with an appropriate empathetic expression: “Why are you sad today?” or “Glad to see you happy today!” Moreover, the SDK’s face recognition allows Rover to learn and distinguish between individuals for a personalized experience.

    3. HARDWARE ARCHITECTURE


    Figure 4:Rover's mobile LEGO* platform. Centrally located with glowing green buttons is the LEGO Mindstorms* EV3 microcontroller, which is connected to the servos that move the base. Also note the support structures and the locking mechanism to mount a laptop.

    Rover uses widely accessible and affordable off-the-shelf hardware that many people may already own and can transform into a smart home robot. It consists of a mobile LEGO platform that carries a depth-camera and a laptop for perception, image processing, path-planning, and human-robot interaction. The LEGO Mindstorms* EV3 set is a great tool for rapid prototyping customized robot models. It includes a microcontroller, sensors, and three servos with encoders, which allow for easy calculation of travelled distances.
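
    As a small illustration of how those encoder readings translate into travelled distance (the wheel diameter below is an assumed example value, not a measurement of the actual Rover build):

    // Convert an EV3 servo encoder reading (in degrees) into the distance
    // travelled by the attached wheel. The wheel diameter is an assumed
    // example value, not the real Rover measurement.
    double DistanceFromEncoder(double encoderDegrees, double wheelDiameterMeters = 0.056)
    {
    	const double kPi = 3.14159265358979323846;
    	const double wheelCircumference = kPi * wheelDiameterMeters;
    	return (encoderDegrees / 360.0) * wheelCircumference;
    }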


    Figure 5:Rover's mobile platform with an attached Creative* Interactive Gesture Camera for gesture recognition, face detection, and 3D perception.

    The Creative Interactive Gesture Camera attached to EV3 contains a QVGA depth sensor and a HD RGB image sensor. The 0.5ft to 3.5ft operating range of the depth sensor allows for 3D perception of objects and obstacles in near range. It is powered solely by the USB port and doesn’t require an additional power supply, which makes it a good fit for mobile use on a robot. Rover’s laptop—an Ultrabook™ with an Intel® Core i7 processor and a touch screen—is mounted on top of the mobile LEGO platform and interfaces the camera and the LEGO microcontroller. The laptop is powerful enough to perform face detection and gesture and speech recognition and to evaluate the depth images in soft real time to steer the robot and avoid obstacles. All depth images and encoder data from the servos are filtered and combined into a map, which serves the robot for indoor localization and collision-free path planning.


    Figure 6:Complete Rover assembly with the mobile LEGO* platform base, the Creative* Interactive Gesture Camera in the front and the laptop attached and locked in place.

    4. SOFTWARE ARCHITECTURE


    Figure 7:Rover's software architecture with most components for perception, a couple of planners, and a few application use cases. All of these building blocks run simultaneously in multiple threads and communicate with each other via messages. The green-tinted components utilize the Intel® Perceptual Computing SDK. All other modules are custom-built.

    Rover’s control software is a multi-threaded application integrating a graphical user interface implemented in the cross-platform application framework Qt, a perception layer utilizing the Intel Perceptual Computing SDK, and custom-built planning, sensing, and hardware interface components. CMake*, a popular open-source build system, is used to find all necessary dependencies, configure the project, and create a Visual Studio* solution on Windows* [4, 5]. The application runs on an Ultrabook laptop running the Windows operating system and mounted directly on the mobile LEGO platform.

    As shown in Figure 7, the application layer has three different use case components: the visible and audible Human-Robot Interface, an Exploration use case that lets Rover explore a new and unknown environment, and a smartphone remote control of the robot. The planning layer includes a collision-free path planner based on a learned map and a task planner that decides for the robot to move, explore, and interact with the user. A larger number of components form the perception layer, which is common for service robots as they have to sense their often unknown environments and respond safely to unexpected changes. Simultaneous Localization and Mapping (SLAM) and Obstacle Detection are custom-built and based on the depth images from the Perceptual Computing SDK, which also provides the functionality for gesture recognition, face detection, and speech recognition.

    The following sections briefly cover the Human-Robot Interface and describe in more detail the implementation of gesture recognition and face detection for the robot.

    4.1. User Interface

    The human-robot interface of Rover is implemented as a Qt5 application [6]. Qt includes tools for window and widget creation and commonly used features, such as threads and futures for concurrent computations. The main window depicts a stylized face consisting of two buttons: for the robot’s eyes and mouth. Depending on the robot’s mood the mouth forms a smile or a frown. When nobody interacts with the robot, it goes to sleep. When it detects a person in front of it, it wakes up and responds to gestures, which trigger actions. The robot’s main program launches several different threads for the detection of different Intel Perceptual Computing features. It utilizes Qt’s central signal/slot mechanism for communication between objects and threads [7]. Qt’s implementation of future classes is utilized whenever the robot speaks for asynchronous speech output [8].

    4.2. Perception

    The robot's perception relies on the camera, which features a color and a depth sensor. The camera is interfaced through the SDK, which enables applications to easily integrate gesture, face, and speech recognition, as well as speech synthesis.

    4.2.1. Gesture Recognition

    Simple, easy-to-learn hand gestures, which are realized utilizing the SDK, trigger most of Rover’s actions. When a person shows a thumbs-up gesture, the robot will look happy, say “Let’s go!” and can start autonomous driving or another configured action. When the robot is shown a thumbs-down gesture, it will put on a sad face, vocalize its unhappiness, and stop mobile activities in its default configuration. When showing the robot a high-five, it will crack a joke. Rover responds to all of the SDK’s default gestures, but here we will just focus on these three: thumbs-up, thumbs-down, and high-five.

    Rover’s gesture recognition is implemented in a class GesturePipeline, which runs in a separate thread and is based on the class UtilPipeline out of the convenience library pxcutils in the SDK and QObject from the Qt framework. GesturePipeline implements the two virtual UtilPipeline functions OnGesture() and OnNewFrame() and emits a signal for each recognized gesture. The class also implements the two slots work() and cleanup(), which are required to move the pipeline into its own QThread. Therefore, the declaration of GesturePipeline is very simple and similar to the provided gesture sample [9, 10]:

    
    #ifndef GESTUREPIPELINE_H
    #define GESTUREPIPELINE_H
    
    #include <QObject>
    #include "util_pipeline.h"
    
    class GesturePipeline : public QObject, public UtilPipeline
    {
    	Q_OBJECT
    
    public:
    	GesturePipeline();
    	virtual ~GesturePipeline();
    
    	virtual void PXCAPI OnGesture(PXCGesture::Gesture *data);
    	virtual bool OnNewFrame();
    
    protected:
    	PXCGesture::Gesture m_gdata;
    
    signals:
    	void gesturePoseThumbUp();
    	void gesturePoseThumbDown();
    	void gesturePoseBig5();
    	// ... further gesture signals
    
    public slots:
    	void work();
    	void cleanup();
    };
    
    #endif /* GESTUREPIPELINE_H */
    
    
    
    

    Listing: GesturePipeline.h

    Besides the empty default constructor and destructor, implementation in GesturePipeline.cpp is limited to the four methods mentioned above. The method work() is executed when the pipeline thread is started as a QThread object. It enables gesture processing from within UtilPipeline and runs its LoopFrames() method to process the camera’s images and recognize gestures in subsequent image frames. The implementation of work() is as follows:

    
    void GesturePipeline::work()
    {
    	EnableGesture();
    	if (!LoopFrames()) wprintf_s(L"Failed to initialize or stream data");
    };

    Listing: GesturePipeline.cpp – work()

    The method cleanup() is called when the GesturePipeline thread is terminated. In this case it does nothing and is implemented as an empty function.
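    For completeness, a minimal (intentionally empty) implementation matching that description could look like this:

    void GesturePipeline::cleanup()
    {
    	// Intentionally empty: there is nothing to release when the thread finishes.
    };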

    Once started via LoopFrames(), UtilPipeline calls OnNewFrame() for every acquired image frame. To continue processing and recognizing gestures, this function returns true on every call.

    
    bool GesturePipeline::OnNewFrame()
    {
    	return true;
    };
    
    
    
    

    Listing: GesturePipeline.cpp – OnNewFrame()

    OnGesture() is called from UtilPipeline when a gesture is recognized. It queries the data parameter for activated gesture labels and emits an appropriate Qt signal.

    
    void PXCAPI GesturePipeline::OnGesture(PXCGesture::Gesture *data)
    {
    	if (data->active)
    	{
    		switch (data->label)
    		{
    		case PXCGesture::Gesture::LABEL_POSE_THUMB_UP:
    			emit gesturePoseThumbUp();
    			break;
    
    		case PXCGesture::Gesture::LABEL_POSE_THUMB_DOWN:
    			emit gesturePoseThumbDown();
    			break;
    
    		case PXCGesture::Gesture::LABEL_POSE_BIG5:
    			emit gesturePoseBig5();
    			break;
    		// ... further gestures
    		}
    	}
    };
    
    
    
    

    Listing: GesturePipeline.cpp – OnGesture()

    The emitted Qt signals would have little effect if they weren't connected to appropriate slots of the application's main control thread, MainWindowCtrl, which therefore declares a slot for each signal and implements the robot's responses.

    
    class MainWindowCtrl :public QObject
    {
    	Q_OBJECT
    
    public slots:
    	void gesturePoseThumbUp();
    	void gesturePoseThumbDown();
    	void gesturePoseBig5();
    	// ... further gesture slots
    
    
    
    

    Listing: MainWindowCtrl.h snippet declaration of gesture slots.

    The implementation of the actions triggered by the gestures above is fairly simple. The robot's state variable is switched to RUNNING or STOPPED, and its mood is switched between HAPPY and SAD. Voice feedback is assigned accordingly and spoken asynchronously via SpeakAsync, a method that uses the QFuture class of the Qt framework for asynchronous computation.

    
    void MainWindowCtrl::gesturePoseThumbUp()
    {
    	std::wstring sentence(L"Let's go!");
    	SpeakAsync(sentence);
    
    	mood = HAPPY;
    	state = RUNNING;
    	stateChange();
    };
    
    void MainWindowCtrl::gesturePoseThumbDown()
    {
    	std::wstring sentence(L"Aww");
    	SpeakAsync(sentence);
    
    	mood = SAD;
    	state = STOPPED;
    	stateChange();
    };
    
    void MainWindowCtrl::gesturePoseBig5()
    {
    	std::wstring sentence(L"I would totally high five you, if I had arms.");
    	SpeakAsync(sentence);
    
    	mood = HAPPY;
    	state = STOPPED;
    	stateChange();
    };
    
    
    
    

    Listing: MainWindowCtrl.cpp – gesture slot implementation.
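    The listings above call SpeakAsync(), whose body is not shown in this article. A minimal sketch using QtConcurrent and QFuture, assuming a blocking Speak() helper that wraps the SDK's voice synthesis module (both names are illustrative, not necessarily Rover's actual implementation), might look like this:

    #include <QtConcurrent/QtConcurrent>
    #include <QFuture>
    #include <string>

    void MainWindowCtrl::SpeakAsync(const std::wstring &sentence)
    {
    	// Run the blocking text-to-speech call in Qt's global thread pool so the
    	// UI and control loop are not blocked while the robot speaks.
    	// Speak() is a hypothetical blocking wrapper around the SDK's voice module.
    	QFuture<void> future = QtConcurrent::run([this, sentence]() {
    		Speak(sentence);
    	});
    	Q_UNUSED(future);	// fire-and-forget; the future could be stored to track completion
    };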

    The only missing piece between the signals of GesturePipeline and the slots of MainWindowCtrl is the setup procedure implemented in a QApplication object, which creates the GesturePipeline thread and the MainWindowCtrl object and connects the signals to the slots. The following listing shows how to create a QThread object, move the GesturePipeline to that thread, connect the thread’s start/stop signals to the pipeline’s work()/cleanup() methods and the gesture signals to the appropriate slots of the main thread.

    
    	// create the gesture pipeline worker thread
    	gesturePipeline = new GesturePipeline;
    	gesturePipelineThread = new QThread(this);
    	// connect the signals from the thread to the worker
    	connect(gesturePipelineThread, SIGNAL(started()),
    		gesturePipeline, SLOT(work()));
    	connect(gesturePipelineThread, SIGNAL(finished()),
    		gesturePipeline, SLOT(cleanup()));
    	gesturePipeline->moveToThread(gesturePipelineThread);
    	// Start event loop and emit Thread->started()
    	gesturePipelineThread->start();
    	// connect gestures from pipeline to mainWindowCtrl
    	connect(gesturePipeline, SIGNAL(gesturePoseThumbUp()),
    		mainWindowCtrl, SLOT(gesturePoseThumbUp()));
    	connect(gesturePipeline, SIGNAL(gesturePoseThumbDown()),
    		mainWindowCtrl, SLOT(gesturePoseThumbDown()));
    	connect(gesturePipeline, SIGNAL(gesturePoseBig5()),
    		mainWindowCtrl, SLOT(gesturePoseBig5()));
    	// ... further gestures

    Listing: Application.cpp – gesture setup

    4.2.2. Face Detection

    When Rover stands still and nobody interacts with it, it closes its eyes and goes to sleep. However, when a person shows up in front of the robot, Rover will wake up and greet them. This functionality is realized using the SDK’s face detector.

    Face detection is implemented in a class FacePipeline that is structured very similarly to GesturePipeline and is based on the Face Detection sample in the SDK's documentation [11]. It runs in a separate thread and is derived from the classes UtilPipeline and QObject. FacePipeline implements the virtual UtilPipeline function OnNewFrame() and emits one signal when at least one face is detected in a frame and another when no face is detected. It also implements the two slots work() and cleanup(), which are required to move the pipeline into its own QThread. Following is the declaration of FacePipeline:

    
    #ifndef FACEPIPELINE_H
    #define FACEPIPELINE_H
    
    #include <QObject>
    #include "util_pipeline.h"
    
    class FacePipeline : public QObject, public UtilPipeline
    {
    	Q_OBJECT
    
    public:
    	FacePipeline();
    	virtual ~FacePipeline();
    
    	virtual bool OnNewFrame();
    
    signals:
    	void faceDetected();
    	void noFaceDetected();
    
    public slots:
    	void work();
    	void cleanup();
    };
    
    #endif /* FACEPIPELINE_H */
    
    
    
    

    Listing: FacePipeline.h

    The constructor, destructor, and cleanup() method are empty. The method work() calls LoopFrames() and starts UtilPipeline.

    
    void FacePipeline::work()
    {
    	if (!LoopFrames()) wprintf_s(L"Failed to initialize or stream data");
    };

    Listing: FacePipeline.cpp – work()

    The method OnNewFrame is called by UtilPipeline for every acquired frame. It queries the face analyzer module of the Intel Perceptual Computing SDK, counts the number of detected faces, and emits the appropriate signals.

    
    bool FacePipeline::OnNewFrame()
    {
    	// query the face detector
    	PXCFaceAnalysis* faceAnalyzer = QueryFace();
    	// loop all faces
    	int faces = 0;
    	for (int fidx = 0; ; fidx++)
    	{
    		pxcUID fid = 0;
    		pxcU64 timeStamp = 0;
    		pxcStatus sts = faceAnalyzer->QueryFace(fidx, &fid,&timeStamp);
    		if (sts < PXC_STATUS_NO_ERROR) // no more faces
    			break;
    		else
    			faces++;
    	};
    	if (faces > 0)
    		emit faceDetected();
    	else
    		emit noFaceDetected();
    
    	return true;
    };
    
    
    
    

    Listing: FacePipeline.cpp – OnNewFrame()

    Respective slots for the face detector are declared in the application’s main control thread:

    
    class MainWindowCtrl :public QObject
    {
    	Q_OBJECT
    
    public slots:
    	void faceDetected();
    	void noFaceDetected();
    
    
    
    

    Listing: MainWindowCtrl.h – declaration of face detector slots.

    The implementations of the face detector slots update the robot's sleep/awake state, its mood, and its program state. When no face is detected, a timer is started that puts the robot to sleep unless it is carrying out a task. This keeps the methods simple.

    
    void MainWindowCtrl::faceDetected()
    {
    	// only transition to the next step, when the program is at
    	// START
    	if (state == START)
    	{
    		awake = AWAKE;
    		mood = HAPPY;
    		state = FACE_DETECTED;
    		stateChange();
    	};
    };
    
    void MainWindowCtrl::noFaceDetected()
    {
    	if ((state != START) && (state != RUNNING))
    	{
    		startOrContinueAwakeTimeout();
    		if (awakeTimeout)
    		{
    			awake = ASLEEP;
    			mood = HAPPY;
    			state = START;
    			stateChange();
    		};
    	};
    };

    Listing: MainWindowCtrl.cpp – face detector slot implementation.
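    The helper startOrContinueAwakeTimeout() is not shown in the article. A minimal sketch using QElapsedTimer, assuming a member variable QElapsedTimer m_awakeTimer and an arbitrary 30-second limit (both are assumptions, not Rover's actual values), could look like this:

    #include <QElapsedTimer>

    void MainWindowCtrl::startOrContinueAwakeTimeout()
    {
    	const qint64 kAwakeTimeoutMs = 30000;	// assumed limit before falling asleep

    	if (!m_awakeTimer.isValid())
    		m_awakeTimer.start();	// first frame without a detected face

    	awakeTimeout = (m_awakeTimer.elapsed() > kAwakeTimeoutMs);
    };

    When a face is detected again, the timer would be invalidated so the countdown restarts; that reset is omitted from the sketch.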

    Similar to the gesture recognizer, the main application creates the FacePipeline object, moves it into a Qt thread to run concurrently, and connects the face detector signals to the appropriate slots of the main control thread.

    
    	// create the face pipeline worker thread
    	facePipeline = new FacePipeline;
    	facePipelineThread = new QThread(this);
    	// connect the signals from the thread to the worker
    	connect(facePipelineThread, SIGNAL(started()), facePipeline,
    		SLOT(work()));
    	connect(facePipelineThread, SIGNAL(finished()), facePipeline,
    		SLOT(cleanup()));
    	facePipeline->moveToThread(facePipelineThread);
    	// Start event loop and emit Thread->started()
    	facePipelineThread->start();
    	// connect events from face pipeline to mainWindowCtrl
    	connect(facePipeline, SIGNAL(faceDetected()), mainWindowCtrl,
    		SLOT(faceDetected()));
    	connect(facePipeline, SIGNAL(noFaceDetected()),
    		mainWindowCtrl, SLOT(noFaceDetected()));

    Listing: Application.cpp – face detector setup.

    5. RESULTS

    Based on our observations at recent exhibitions in the U.S. and Europe, including Mobile World Congress, Maker Faire, CeBIT, the California Academy of Sciences, Robot Block Party, and the Game Developers Conference, people are ready and excited to try interacting with a robot. Google's official plunge into the world of artificial intelligence and robotics has inspired the general public to look deeper and pay attention to the future of robotics.


    Figure 8: Rover at Mobile World Congress surrounded by a group of people.

    Fear and apprehension have been replaced by curiosity and enthusiasm. Controlling a machine has predominantly been done through dedicated hardware, unintuitive control panels, and workstations. That boundary is dissolving now that humans can communicate with machines through natural, instinctive interactions, thanks to advances in localization and mapping and in gesture and facial recognition. Visitors are astounded when they see they can control an autonomous mobile robot through hand gestures and facial expressions using the Ultrabook, the Intel Perceptual Computing SDK, and the Creative Interactive Gesture Camera. We have encountered these responses across a very wide spectrum of people: young and old, men and women, domestic and international.

    6. OUTLOOK

    Unlike many consumer robots on the market today, Rover is capable of mapping out its environment without any external hardware like a remote control. It can independently localize specific rooms in a home like the kitchen, bathroom, and bedroom. If you’re at the office and need to check up on a sick child at home, you can simply command Rover to go to a specific room in your house without manually navigating it. Resembling a human, this robot has short- and long-term memory. Its long-term memory is stored in the form of a map that allows it to move independently. It can recognize and therefore maneuver around furniture, corners, and other architectural boundaries. Its short-term memory is capable of recognizing an object that unpredictably darts in front of the robot, prompting it to stop until the 3D camera no longer detects any obstacles in its path. We are looking forward to sharing further details about robot localization, mapping, and path-planning in future articles.

    We see vast potential for the widespread use and adoption of Perceptual Computing technology. Professions and industries that embody the "human touch," from healthcare to hospitality, may reap the most benefits from it. Fundamentally, as human beings we all seek to understand and be understood, and the best technologies are those that make life easier, more efficient, or enhanced in an impactful way. Simultaneous localization and mapping, together with gesture and facial recognition, blurs the lines between humanity and machines, bringing us closer to the robots that can inhabit our realities and imaginations.

    7. ABOUT THE AUTHORS


    Figure 9: Devy and Martin Wojtczyk with Rover.

    Devy Tan-Wojtczyk is co-founder of Cubotix. She brings over 10 years of business consulting experience with clients from UCLA, GE, Vodafone, Blue Cross of California, Roche, Cooking.com, and New York City Department for the Aging. She holds a BA in International Development Studies from UCLA and an MSW with a focus on Aging from Columbia University. For fun one weekend she led a newly formed cross-functional team consisting of an idea generator, two developers, and a designer in business and marketing efforts at the 48-hour HP Intel Social Good Hackathon, which resulted in a cash award in recognition of technology, innovation, and social impact. Devy was also competitively selected to attend Y Combinator's first ever Female Founders Conference.

    Martin Wojtczyk is an award-winning software engineer and technology enthusiast. With his wife Devy he founded Cubotix http://www.cubotix.com, a DIY community, creating smart and affordable service robots for everybody. He graduated in computer science and earned his PhD (Dr. rer. nat.) in robotics from Technical University of Munich (TUM) in Germany after years of research in the R&D department of Bayer HealthCare in Berkeley. Speaking engagements include Google DevFest West, Mobile World Congress, Maker Faire, and many others in the international software engineering and robotics community. In the past 10 years he developed the full software stack for several industrial autonomous mobile service robots. He won multiple awards in global programming competitions, was recently featured on Makezine.com, and recognized as an Intel Software Innovator.

    8. RELATED CONTENT

    [1] Intel Perceptual Computing SDK: https://software.intel.com/en-us/vcsource/tools/perceptual-computing-sdk/home

    [2] Creative Interactive Gesture Camera Kit: http://click.intel.com/creative-interactive-gesture-camera-developer-kit.html

    [3] Intel Perceptual Computing Showcase – Rover – A LEGO Self-Driving Car: https://software.intel.com/sites/campaigns/perceptualshowcase/lego-self-driving-car.htm

    [4] CMake – http://cmake.org

    [5] Martin Wojtczyk and Alois Knoll. A cross platform development workflow for C/C++ applications. In Herwig Mannaert, Tadashi Ohta, Cosmin Dini, and Robert Pellerin, editors, Software Engineering Advances, 2008. ICSEA ’08. The Third International Conference, 224-9, Sliema, Malta, October 2008. IEEE Computer Society.

    [6] Qt Project: http://qt-project.org

    [7] Qt Project Documentation – Signals & Slots: http://qt-project.org/doc/qt-5/signalsandslots.html

    [8] Qt Project Documentation – QFuture Class: http://qt-project.org/doc/qt-5/qfuture.html

    [9] Intel Perceptual Computing SDK Documentation – UtilPipeline: https://software.intel.com/sites/landingpage/perceptual_computing/documentation/html/index.html?utilpipeline.html

    [10] Intel Perceptual Computing SDK Documentation – Add Gesture Control: https://software.intel.com/sites/landingpage/perceptual_computing/documentation/html/index.html?tuthavok_add_gesture_control.html

    [11] Intel Perceptual Computing SDK Documentation – Code Walkthrough Of Face Detection Sample: https://software.intel.com/sites/landingpage/perceptual_computing/documentation/html/index.html?tutface_code_explanation.html

    Intel® RealSense™ Technology

    First announced at CES 2014, Intel® RealSense™ technology is the new name and brand for what was Intel® Perceptual Computing technology, the intuitive user interface SDK with functions like speech recognition, gesture, hand and finger tracking, and facial recognition that Intel introduced in 2013. Intel RealSense Technology gives developers additional features including scanning, modifying, printing, and sharing in 3D plus major advances in augmented reality interfaces. With these new features, users can naturally manipulate scanned 3D objects using advanced hand- and finger-sensing technology.


    Legend of Xuan Yuan Case Study: Get the best gameplay with 2 in 1 state, touch, and accelerometer


    Download PDF

    Abstract

    Tencent wanted to give gamers the best experience on Intel® Ultrabook™ and 2 in 1 systems. Legend of Xuan Yuan was already a successful game, but these systems gave Tencent a new opportunity. Many systems now offer 2 in 1 usage, meaning they can be used as a traditional laptop or as a tablet. Tencent worked with Intel engineers to detect the laptop and tablet modes and change the game's state accordingly. They updated the UI to support touch, which has become one of the most essential and exciting features on tablets. Finally, the system's accelerometer enabled new gameplay: shaking the device triggers a special action in the game.

    Introducing the first touch 3D MMORPG for the Chinese market

    Tencent is the biggest game developer in China. With a growing number of 2 in 1 systems in the Chinese market, Tencent wanted to give their players a unique experience. After two years in the market, Legend of Xuan Yuan was already a popular title. The availability of Ultrabooks and 2 in 1 systems made it the right time to add touch and accelerometer support to the game. Although 3D MMORPGs are very popular in China, none of them supported touch before Legend of Xuan Yuan. Tencent had a chance to innovate, but there was also risk – would the changes be successful? This case study illustrates how, working with Intel engineers, Tencent changed the game to play well on 2 in 1 systems and Ultrabooks running Windows* 8.

    Legend of Xuan Yuan needs two different UIs for tablet and laptop modes. On 2 in 1 systems, the game detects when the system is used as a laptop versus a tablet. The game uses keyboard and mouse when the system is in laptop mode. When it’s used as a tablet, the game switches to a touch-only UI. Tencent wanted an effortless transition between the traditional laptop mode and touch gameplay. The player has a seamless experience because the UI changes automatically to suit each mode. In this case study, we’ll look at how to detect the mode of a 2 in 1 system and change the UI based on that mode.

    Converting an existing user interface to touch can be difficult. It’s especially hard for games with rich UIs that rely on left-click, right-click, and multiple command keys. There’s no single formula for adapting this kind of UI. It requires great care to deliver smooth and satisfying gameplay via touch. Because the game had an existing installed base, the team was careful to make the smallest changes possible and not alienate existing players. We’ll review the UI design.

    Since these systems include an accelerometer, Tencent also added support for a “super-kill” attack against opponents when you shake the system during gameplay.

    Changing game mode to match the 2 in 1 state

    Legend of Xuan Yuan has two UI personalities and dynamically changes the UI based on the state of a 2 in 1 system. When the system is in laptop mode, Legend of Xuan Yuan plays as it always has with keyboard and mouse input. When the system is in tablet mode, the player uses touch input. How does this work?

    Here’s how we did it: Detecting 2 in 1 state changes and changing UI mode

    Legend of Xuan Yuan listens for the WM_SETTINGCHANGE message. This message notifies apps when the system changes state. The WM_SETTINGCHANGE message comes with its LPARAM pointing to a string with a value of “ConvertibleSlateMode” when the 2 in 1 state changes. A call to GetSystemMetrics(SM_CONVERTIBLESLATEMODE) reveals the current state.

    When the game is in tablet mode, it displays an overlay UI with touch buttons for the various UI actions. It hides the overlay UI in laptop mode.

    Legend of Xuan Yuan uses detection code like this:

    LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
      switch (message)
      {
      case WM_SETTINGCHANGE:
        if (lParam != NULL && wcscmp(TEXT("ConvertibleSlateMode"), (TCHAR *)lParam) == 0)
        {
          BOOL bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
          if (bSlateMode) //Slate mode: display the touch overlay UI
            …
          else //Laptop mode: hide the touch overlay UI
            …
        }
        break;
      // ... handle other messages
      }
      return DefWindowProc(hWnd, message, wParam, lParam);
    }
    

    Figure 1: Code to detect system setting change and check system mode

    For more details, check the basic 2 in 1 aware sample.

    This technique must be enabled by the system OEM, with a supporting driver installed, in order to work. In case it’s not properly enabled on some systems, we included a manual way to change the UI configuration. A later section describes how the UI works.

    Now it’s your turn: Detect if the system is used as a laptop or tablet

    How can you detect the system state for your game and how should it change its UI? To play best on 2 in 1 systems, your game should have dual personalities and dynamically change its UI based on the state of the system.

    First plan a UI for both laptop and tablet modes. Consider how the system might be held or placed. Pick the UI interactions that work best for your players. As you design the touch interface, you should reserve more screen area for your touch controls than you typically need for mouse buttons. Otherwise, players will struggle to reliably press the touch controls.

    Touch interactions are often slower than keyboard and mouse, so keep that in mind too. Game menus also need both a keyboard plus mouse UI and a touch UI.

    It’s a good idea to check the system state at startup with GetSystemMetrics and set your UI accordingly. Remember that not all systems will correctly report their state or notify your game of state changes, so choose a default startup state for your game’s UI in case the state isn’t detected.
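    As a minimal sketch of that startup check (SetTouchOverlayVisible() is a hypothetical helper in your game, not part of Legend of Xuan Yuan):

    #include <windows.h>

    void SetTouchOverlayVisible(bool visible);   // hypothetical game helper

    void InitUiModeAtStartup()
    {
      // SM_CONVERTIBLESLATEMODE returns 0 while the system is in slate (tablet) mode.
      bool slateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
      // On systems that don't support this metric the result may be unreliable,
      // which is why the game also offers a manual UI toggle.
      SetTouchOverlayVisible(slateMode);
    }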

    Listen for the WM_SETTINGCHANGE message once the game is running. When the message arrives, check its contents for an LPARAM pointing to a string with a value of “ConvertibleSlateMode”. That value indicates that the game should call GetSystemMetrics(SM_CONVERTIBLESLATEMODE) and check if the UI should change.

    Detection may not be conclusive because all systems may not correctly report state changes. Your game should probably default to laptop mode if the detection doesn’t give certain results. It should definitely include a way to manually change the UI between keyboard/mouse mode and tablet mode.

    For a complete sample that detects system state and changes its UI, look at the 2 in 1 aware sample. To see a more complex sample that detects docking, screen orientation, and more, check the detecting state sample.

    Deciding on a touch message type

    You’ll need to decide which Window message type to support before you can add touch support to an existing application. Choose one of the three different sets of Window messages: WM_POINTER, WM_GESTURE, or WM_TOUCH. We’ll walk through the decision process used for Legend of Xuan Yuan and examine ways you can do the same for your game.

    How we did it: Comparing touch message types

    Touch support is at the center of the new version of Legend of Xuan Yuan. When players use the touch screen, they see a new UI with a set of touch controls on screen.

    WM_POINTER is the easiest message type to code, and it supports a rich set of gestures. WM_POINTER only runs on Windows 8 and beyond. Tencent wanted to support a large installed base of Windows 7 players, so WM_POINTER was not the right choice.

    Before we discuss the remaining message types, let’s review the key UI elements for Legend of Xuan Yuan. The game’s touch UI uses on-screen button controls. These controls can be used for movement and actions at the same time. The movement and action controls are on opposite sides of the screen, for use with two hands. These controls are in the bottom corners of the screen. There’s also an icon near the top of the screen to bring up a cascading menu for more complex UI elements. We’ll discuss the design of the UI later, but this gives us a good idea how the UI elements must work.


    Figure 2: On-screen touch overlay UI, in left and right bottom corners

    The game must recognize simultaneous points of contact from different parts of the screen. Because multiple touches must work at the same time, we refer to this as multi-touch.

    Now that we understand the main parts of the multi-touch UI, we can compare the remaining touch message types: WM_GESTURE and WM_TOUCH. Of the two, WM_GESTURE is the easier to code: it has simple support for typical gestures like a two-finger pinch (zoom) and a finger swipe (pan). This message type hides some of the detail of touch interaction and presents your code with a complete gesture once the gesture is done. Simple touch events are still sent to your game as mouse messages. This means a typical touch interface could be implemented using mouse messages for simple touch events plus WM_GESTURE for complex gestures.

    The gestures supported by WM_GESTURE can only include one set of related touch points. This makes it difficult to support gestures from this kind of multi-touch UI where the player touches the screen in different places. WM_GESTURE is a poor choice for this game.

    WM_TOUCH is the lowest-level touch message type. It gives complete access to all touch events (e.g., “finger down”). WM_TOUCH requires you to do more work than the other message types since you must write code to represent all high-level touch events and gestures out of low-level touch messages. In spite of the extra work required, WM_TOUCH was the clear choice for Legend of Xuan Yuan. WM_TOUCH gave complete control over all touch interaction including multi-touch.

    When there’s a physical touch on the screen, the system sends WM_TOUCH messages to the game. The game also receives a mouse click message at the same time. This makes it possible for apps without full touch support to behave properly with touch. Because these two messages of different types describe the same physical event, this can complicate the message handling code. Legend of Xuan Yuan uses mouse-click messages where possible and discards duplicate messages.

    Your turn: Choosing the right touch message type for your game

    WM_POINTER is a great option if your game will only be used on Windows 8. If you need backward compatibility, look at both WM_GESTURE and WM_TOUCH messages.

    Consider your UI design as you compare the message types. If your UI relies heavily on gestures, and you can easily write mouse-click handlers for the non-gesture single touch events, then WM_GESTURE is probably right for your game. Otherwise, use WM_TOUCH. Most games with a full-featured UI use WM_TOUCH, especially when they have multiple controls that players will touch at the same time.

    When you evaluate the touch messages, don’t forget the menu system. Remember also to discard extra messages that arrive as mouse clicks.

    To learn more about the tradeoffs between the three message types, see this article. For more detail on choosing between the backwards-compatible WM_TOUCH and WM_GESTURE message types, see https://software.intel.com/en-us/articles/touch-samples.

    Adapting the UI to use touch

    Adapting an existing game UI to touch can be complex, and there’s no single formula for how to do it well.

    How we did it: A new touch UI

    The keyboard and mouse UI is familiar. It uses the W, A, S, D keys to move the character. Customizable action keys at the bottom of the screen and shortcut keys 1-9 hold potions and attack skills and open richer UI elements. These UI elements include inventory, skill tree, task, and map. Right-click selects the character's weapon and armor or opens a treasure box.

    The touch screen is available at all times, but the touch UI is hidden by default during keyboard and mouse gameplay. A touch button is visible on-screen in this mode.


    Figure 3: Pressing this touch button in this mode brings up the touch UI

    If the player switches the system to tablet mode or touches this button, the full touch UI appears on-screen.

    How we did it: Elements of the touch UI

    In tablet mode, the player usually holds the system with both hands. The UI layout uses both thumbs to minimize any grip changes. Move and attack actions are grouped for easy access by the player’s left and right thumbs.

    First, we designed a wheel control to move the character. The wheel is an overlay on the left side of the screen. This mirrors the thumbstick on a game controller, and the familiar placement makes it easy to pick up. The player's left thumb will usually be in constant contact with the screen. As they slide their thumb around, the character moves on-screen in the direction of the player's thumb.

    The regular in-game action bar is at the bottom of the screen, but that doesn’t work well for thumb use. We added a group of 4 large action buttons in the bottom right corner where the player’s right thumb can easily reach them. The player can configure these to trigger their most frequently-used actions by dragging attack skills or potions to each button.

    The player must target an enemy before attacking. With the keyboard/mouse interface, left-click targets a single enemy and TAB targets the next enemy within attack range. In touch mode there’s a large button to target the next close enemy. The player can also touch an enemy to target them directly, but that’s not common since it disrupts the player’s grip on the tablet.

    The keyboard and mouse UI uses right-click to open a treasure box, equip a weapon or armor, or drink potions. Tap and hold is the best touch replacement for right-click, so it replaces right-click for the touch UI.

    With the keyboard/mouse UI, there’s a small icon on-screen to open the cascaded windows. This doesn’t work well for touch since the icons are too small. The touch UI includes an icon on-screen to bring up the rest of the UI elements through a cascading set of icons. These icons bring up more complex parts of the UI like the inventory bag, skill tree, tasks, etc. There is also an option to toggle the UI between the keyboard/mouse and the touch overlay. This gives the player an easy way to change between the two UIs.


    Figure 4: Full touch UI with movement wheel, action and target buttons, and the cascading UI displayed

    Here’s the full touch UI with the cascading icons open.

    How we did it: Message handling for the touch UI

    How does the message handling work? It varies for different parts of the UI. Both WM_TOUCH and mouse messages are used. The action, targeting, and cascading UI buttons all use mouse click messages. The movement wheel and main part of the game screen use WM_TOUCH messages.

    Typical gameplay involves continuous touching on the movement wheel control, with repeated use of the enemy selection and skill attack buttons. This means that good multi-touch support is essential. Luckily, WM_TOUCH has good support for multi-touch.

    When there’s a WM_TOUCH message, the game saves some context. It compares this touch with other recent WM_TOUCH messages, checks how long the current sequence of touches has been held, and looks for the location of the touch.

    If the WM_TOUCH message was on or near the movement wheel, the code checks the location of the touch relative to the center of the wheel and the previous touch. If the touch was close to a previous touch and this current gesture started on the wheel, the game moves the character in the desired direction. During development, this required some careful configuration to detect the difference between long continuous touches on the movement wheel and other touches on the main part of the screen.

    If a WM_TOUCH message is on the screen away from the other controls, then it might be part of a gesture like zoom or pan, or it may be part of a tap-and-hold. WM_TOUCH messages are compared with previous ones to decide which action to take. If it’s close enough to the first and has been held for longer than 0.2 seconds, it’s treated as a tap-and-hold. Otherwise, it’s a gesture so the screen is adjusted to match.
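    Here is a simplified sketch of such a WM_TOUCH handler. The hit-test and tracking helpers are illustrative placeholders rather than Tencent's actual code, and RegisterTouchWindow() is assumed to have been called on the game window:

    #include <windows.h>
    #include <vector>

    // Hypothetical helpers implemented elsewhere in the game:
    bool HitTestMovementWheel(POINT pt);
    void UpdateMovementWheel(DWORD touchId, POINT pt);
    void TrackFreeTouch(DWORD touchId, POINT pt, DWORD flags);

    // Called from WndProc when a WM_TOUCH message arrives.
    void OnTouchMessage(HWND hWnd, WPARAM wParam, LPARAM lParam)
    {
      UINT count = LOWORD(wParam);
      std::vector<TOUCHINPUT> inputs(count);
      if (GetTouchInputInfo((HTOUCHINPUT)lParam, count, inputs.data(), sizeof(TOUCHINPUT)))
      {
        for (const TOUCHINPUT &ti : inputs)
        {
          // TOUCHINPUT coordinates are in hundredths of a screen pixel.
          POINT pt = { ti.x / 100, ti.y / 100 };
          ScreenToClient(hWnd, &pt);

          if (HitTestMovementWheel(pt))
            UpdateMovementWheel(ti.dwID, pt);        // steer the character
          else
            TrackFreeTouch(ti.dwID, pt, ti.dwFlags); // later classified as a tap-and-hold or gesture
        }
        CloseTouchInputHandle((HTOUCHINPUT)lParam);
      }
    }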

    The system also automatically generates mouse messages for all touch messages. Each mouse message includes extra information detailing where it came from. The GetMessageExtraInfo call identifies the difference.

    #define MOUSEEVENTF_FROMTOUCH 0xFF515700
    
    if ((GetMessageExtraInfo() & MOUSEEVENTF_FROMTOUCH) == MOUSEEVENTF_FROMTOUCH) {
      // Click was generated by wisptis / Windows Touch
    } else {
      // Click was generated by the mouse.
    }

    Figure 5: Check if mouse messages came from touch screen

    When a mouse message was generated from the touch screen and the game has already handled the physical touch via WM_TOUCH, the game discards the mouse message.

    If a touch message is on one of the other controls, then it is discarded and the mouse message is used instead.

    With all UI elements in place, the game plays well with a touch screen.

    This article shows another example of adapting a complex UI to touch in Wargame: European Escalation: https://software.intel.com/en-us/articles/wargame-european-escalation-performance-and-touch-case-study

    Your turn: Building your touch UI

    Before you build a touch UI for your game, imagine all the actions a player might take. Then think about how they might be done with touch (or other sensors like the accelerometer). Pay special attention to the differences between tap and click, continuous actions like press-and-hold, and gestures like drag.

    Decide how the player will do all of these actions with touch and where the visible controls should be. For any on-screen controls or cascading menus, ensure they are big enough to use with a fingertip or thumb. Think about how your typical player will hold the system, and design your UI for easy touch access with a typical grip.

    Now that you have the UI planned, use the simplest message for the job. Identify when a touch hits each control. Plan which message type to use for those controls (mouse or touch) and discard duplicate messages.

    For touch messages, save the context of the touch message. Location, control, and timing will all be useful when you need to compose gestures out of multiple touch messages. Think about the parts of your gameplay that require continuous touch contact. Carefully test this during development to make sure that your game works well with typical variations in gestures. Check a variety of gesture directions, touch locations, proximity to previous touches, and touch durations.

    Start the UI in whatever mode matches the system’s current state. Switch the UI between touch and keyboard/mouse whenever the system state changes. Finally, remember to include a manual way to force the UI change in case the system isn’t configured to notify you properly.

    For more tips on designing your touch UI, see the Ultrabook and tablet Windows touch developer guide.

    Sensors

    Ultrabook and 2 in 1 systems include sensors like gyroscope, accelerometer, GPS, etc. It’s possible to enhance the gameplay experience with them.

    How we did it: Shake for a special action

    Legend of Xuan Yuan uses the accelerometer to detect when the player shakes the system. The player accumulates energy during gameplay, then releases it during a super kill attack. The player can shake the system to trigger the super kill, which attacks nearby enemies for 10-20 seconds.

    We tested some different shake actions to measure typical values from the accelerometer:


    Figure 6: Four shake actions, showing intensity and duration in 3 dimensions

    Any acceleration reading above 1.6 on a single axis counts as a shake. We could also use the sum of the absolute values of acceleration on each axis.

    Because these are real-world events, the data will be noisy and different each time. The values include both long and short shakes. While most of our test shakes gave a single peak value, one of them had several near-peak values. This game uses any shake over 1.6 in any direction on any axis. Multiple shakes within 1.5 seconds are grouped together as one.
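    A minimal sketch of that heuristic, using the thresholds above (the class and the polling interface are illustrative, not Tencent's actual implementation):

    #include <cmath>

    struct AccelSample
    {
      float  x, y, z;       // acceleration per axis, in the same normalized units as Figure 6
      double timeSeconds;   // sample timestamp
    };

    class ShakeDetector
    {
    public:
      // Returns true once per detected shake; repeats within 1.5 seconds are grouped.
      bool OnSample(const AccelSample &s)
      {
        const float  kShakeThreshold = 1.6f;   // per-axis threshold from the text
        const double kGroupWindowSec = 1.5;    // grouping window from the text

        bool overThreshold = std::fabs(s.x) > kShakeThreshold ||
                             std::fabs(s.y) > kShakeThreshold ||
                             std::fabs(s.z) > kShakeThreshold;
        if (!overThreshold)
          return false;

        bool newShake = (s.timeSeconds - m_lastShakeTime) > kGroupWindowSec;
        m_lastShakeTime = s.timeSeconds;       // keep grouping subsequent peaks
        return newShake;
      }

    private:
      double m_lastShakeTime = -1.0e9;         // far in the past
    };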

    With this detection logic in the game, any shake action will unleash a super kill action.

    Your turn: Use the system’s sensors

    Ultrabook and 2 in 1 systems contain a number of sensors. Be creative, and think of ways you might use each of them to enhance your gameplay.

    Whichever sensor(s) you use, calibrate them to see how they react in real-world conditions. Consider the different typical conditions your players will encounter.

    Summary

    We've shown how to adapt an existing game to detect the state (laptop or tablet) of a 2 in 1 system. We also demonstrated how the UI can support touch and how to switch between UIs based on the 2 in 1 system state. Together with the accelerometer-triggered special action, these changes deliver a compelling game experience.

    Tencent took a risk by introducing the first Chinese MMORPG to support touch gameplay. The risk has paid off! Legend of Xuan Yuan plays great on laptops, tablets, and 2 in 1 systems. We hope you have similar success with your game!

    Authors

    Mack Han is a game client software engineer for Tencent with 10 years of game development experience. He has built games for console, PC, and mobile. He has been working on a large 3D MMORPG for years, specializing in rendering and optimization.

    Cage Lu is an Application Engineer at Intel. He has been working with big gaming ISVs in China for several years to help them optimize game client performance and user experience on Intel® platforms.

    Paul Lindberg is a Senior Software Engineer in Developer Relations at Intel. He helps game developers all over the world to ship kick-ass games and other apps that shine on Intel platforms.

    References

    Detecting tablet and laptop mode and screen orientation in a 2 in 1 system: http://software.intel.com/en-us/articles/detecting-slateclamshell-mode-screen-orientation-in-convertible-pc

    More details about adapting to 2 in 1 systems:
    http://software.intel.com/en-us/articles/how-to-write-a-2-in-1aware-application

    Comparing the various Windows 8 touch message types:
    http://software.intel.com/en-us/articles/comparing-touch-coding-techniques-windows-8-desktop-touch-sample

    Case study showing touch implementation for Wargame: European Escalation:
    http://software.intel.com/en-us/articles/wargame-european-escalation-performance-and-touch-case-study

    Touch developer guide:
    http://software.intel.com/en-us/articles/ultrabook-device-and-tablet-windows-touch-developer-guide

    A more complex touch use case:
    http://software.intel.com/en-us/articles/hot-shots-warps-conventional-touch-gaming

    License

    Intel sample source is provided under the Intel Sample Source License Agreement.  Portions of this document are subject to the Microsoft Limited Public License.

     

    Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed.  Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

     

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2014 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

     


    Developing an Educational App for Chromebooks*


    Download Zipfile

    Authors: Dave Bach and Priya Vaidya

    Summary

    Google Chromebooks* are relatively new to the computing scene, but they are already becoming increasingly popular in the education space. Schools and instructors favor Chromebooks for their competitive pricing and reliability, whereas students favor them for their ease of use and trouble-free operation. A common misconception is that Chromebooks have to be connected to the Internet in order to function. This misconception is understandable because most developers don't take advantage of the ability of apps to work offline. If developed correctly, Chromebooks are as flexible as any other PC platform. This case study outlines the lessons learned in developing an educational flashcard client and server that utilize some of the unique features of Chromebooks and the Chrome* web browser.

    It is important to note that apps developed for Chromebooks are fully compatible with other operating systems that support the Google Chrome browser. The fact that an app will behave the same across platforms makes developing for Chromebooks an ideal one-size-fits-all solution.

    Introduction

    Chromebooks are built around Chrome OS, a very lightweight flavor of Linux*. The Linux heritage is obvious when the user boots into developer mode, where the familiar Linux shell presents itself. For most users, however, the centerpiece of the operating system is the preinstalled Chrome web browser. The Chrome web browser on Chrome OS can do everything that the Chrome browser can do on a Windows*, Mac*, or Linux* platform. This might sound very limited in terms of flexibility, but the Chrome web browser, in its latest version, offers APIs that can make Chrome applications feel and behave very much like native apps.

    Developers have two approaches for Chrome OS development: build a hosted app or a packaged app.

    • Packaged apps are, by far, the best way to develop an application for the Chromebook. Packaged apps work either online or offline and run from the host's local hard drive. The user can access the app simply by launching it from the app launcher. Also, using sockets and HTTP requests, packaged apps can optionally establish connections and communicate with remote servers to update their content. Packaged applications are very similar to native apps on Windows and Mac OS*.
    • Hosted apps are very similar to web apps; the only difference is the inclusion of some metadata. The files for a hosted app reside on a remote server, which requires users to have an Internet connection to access and use the app. Hosted apps always open in the web browser window rather than in their own window. The majority of apps on the Chrome Web Store are hosted apps or relinks to web apps.

    Many hosted apps on the web store, such as calculators, timers, note-taking tools, or other apps that don't require an Internet connection to function, would greatly benefit from being converted to packaged apps. The following flowchart can help you choose the right type of app. Notice that a web app can easily be made into a hosted app, whereas a packaged app requires more work and planning from the developer's standpoint.

    When building both hosted and packaged apps, the first step is to create a directory for the app. After the directory is created, the next step is to create a JSON file, manifest.json, for the app's metadata. The field entitled "launch" in the manifest file determines whether the app is hosted or packaged. If the "launch" field has a subfield titled "web_url", the app is a hosted app and its content can be found at the specified URL. Conversely, if the "launch" field has a subfield titled either "script" or "local-path", it's a packaged app. The difference between "local-path" and "script" is that the former opens the app inside the Chrome browser, while the latter runs the application in the background or opens a dedicated window for the application to run in. For both "local-path" and "script", the value must point to a local file. Please refer to Section 1 in the Appendix for examples of manifest files.

    Hosted apps and packaged apps are further differentiated in the packaging step, at which point the app directory is zipped and uploaded onto Google. A hosted app should only contain the manifest file and the launcher icon, whereas a packaged app must contain the manifest file, the launch icon, and the CSS, JavaScript*, and HTML5 files. In practice, a packaged application must be self-sufficient and all of its files must be stored locally in order to run offline. A packaged app cannot rely on scripts or style sheets hosted on a remote server. Other than the manifest and the icon, a hosted app has its JavaScript, CSS, and HTML on a remote server.

    This case study shows how to make a packaged app specifically for Chromebooks and showcases the details for developing the packaged client and a remote server. The app’s theme is educational.

    Content Design

    To decide what our app would be, we talked with consultants in the educational sector. The consultants identified that the educational market prefers apps that foster student-to-student learning. They recommended focusing on the idea of a student note/content sharing app.

    Combining the technology available on Chromebooks and the recommendation from the consultants, we initially decided to make a screen-sharing app that would allow students to share their screens and interact with each other in real time. Screen sharing isn't new; there are plenty of web sites and online services that offer it. What we wanted was a Local Area Network, direct peer-to-peer, screen-sharing app. Interestingly, the Chrome browser has an API for capturing screen events; all we would have to do is send the data over the LAN using a TCP/IP protocol. Unfortunately, we found out that direct P2P communication on Chrome isn't possible. Safety features and IP table rules prevent Chromebooks from connecting to each other directly. There are ways around this, but they require root privileges and booting into developer mode.

    Because P2P connections are not supported, we decided to drop the screen-casting concept and instead develop another form of content sharing. The idea was to provide students with a way to share notes and educational materials with each other. Combining this with other technologies available on Chromebooks today, such as the ability to work online or offline, we set out to develop an application of a kind not yet seen on Chromebooks and Chrome browsers.

    Working offline is very important in the educational space. Although a school might have Internet access, it may not work outside of the classrooms. Internet access is common in countries like the U.S., but many other places in the world do not have it. An educational app that works offline ensures that no student is at a disadvantage and gives everyone an opportunity to learn, whether outside under a tree or in a classroom.

    User Interface Design

    After trying out a few successful educational apps for the Android* and iOS* markets, we finalized what we wanted our application to be. From a user standpoint, an application that gets used is one that is intuitive and responsive. So we decided to create an application where users can create and share flashcards, a concept familiar to everyone. Essentially, users create their own flashcards by entering questions and answers into a template the app provides. The following screenshots show the UI of our application:

    To use the application, the user must first log in.

    If the user doesn’t have an account, one can be created by clicking the register button. The account info is stored on a local database.

    Once logged in, users see a list of all the card sets they either made or downloaded from the online repository.

    They can then click on any set to proceed to the game screen. This particular card set has five questions.

    As the user goes through the questions, the application keeps track of the number of right and wrong answers.

    When all the questions are answered, a completion screen displays. It has two options: restart and new level.

    When new level is clicked, a selection page displays. On this page, users can also choose to make a new set of cards.

    If a user clicks the “make your own” button, the application will ask for the name and description of the new set.

    By default, each set has one question; clicking add question adds more. When users are finished, they can click finish.

    The level select page will then display. Notice that a new level is added.

    Similarly, users can also add a level to their level select page by importing sets from an online repository. All they have to do is go into the catalog page and enter a search key. Using regular expressions, the server will then return relevant packages. Once a list of packages is returned, users can then choose which package to add onto their account.

    Architecture

    The system includes a client and a socket server, which communicate with each other via sockets:

    Technologies Used

    • Client
      • Single HTML page – The client is built on top of a single HTML page to give the application a more native app experience. Using multiple interlinked HTML pages is also a possibility, but doing so results in a noticeable transition from one frame to another, much like navigating between pages in the browser.
      • CSS3 animations – We chose CSS3 animations over traditional jQuery animations for speed. CSS3 animations are faster and are natively supported by the Chrome browser.
      • jQuery – jQuery seems to be the JavaScript package of choice for Google-developed Chrome sample apps. Instead of jQuery, developers could use another JavaScript library that doesn't require inline scripting. Be aware that some packages, such as AngularJS, do not work because of their reliance on inline scripting.
      • JavaScript – JavaScript is the preferred scripting language because of its native support and speed. jQuery and JavaScript have some overlapping features; whenever possible, standard JavaScript was used instead of jQuery.
    • Server
      • Node.js* – There is no set standard for what language can be used on the server side; it could have been PHP, Ruby*, or Python*. Node.js was chosen because it is a relatively new language that is gaining traction in the web development space and is easier to work with than older languages such as PHP.
      • WS – This package is a Node.js websocket implementation that enables socket communication between the client and the socket server. WS was chosen over the more popular socket.io extension because it does not require the client to download an additional script package from the server.
      • MongoDB – This noSQL database is a popular pairing with Node.js, although a SQL database would also work.

    Database Structure

    The application uses two types of databases: client-side and server-side. When an Internet connection is available, the two databases exchange card sets (a minimal indexedDB sketch follows the list below).

    • Client-side database (indexedDB) stores:
      • User names
      • Passcodes
      • Multiple question sets
        • Each set has an array of questions and answers
    • Server-side database stores:
      • Multiple user-uploaded question sets
        • Each set has an array of questions and answers
      • Accounts
        • User names
        • Passwords
        • The set ID for each user
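
    A minimal sketch of the client-side storage is shown below: it opens an indexedDB database and saves one question set. The database name, store name, and record shape are assumptions for illustration, not the app's actual schema.

        // Hypothetical sketch: store a card set in indexedDB on the client.
        var request = indexedDB.open('flashcards', 1);

        request.onupgradeneeded = function (event) {
          var db = event.target.result;
          db.createObjectStore('sets', { keyPath: 'name' });   // one record per card set
        };

        request.onsuccess = function (event) {
          var db = event.target.result;
          var tx = db.transaction('sets', 'readwrite');
          tx.objectStore('sets').put({
            name: 'Biology 101',
            questions: [{ q: 'Powerhouse of the cell?', a: 'Mitochondria' }]
          });
          tx.oncomplete = function () { console.log('set saved locally'); };
        };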

    Packaged App Standards

    • Required files
      • manifest.json – declares permissions and references the app's files
      • HTML page – the program view, what the user sees
      • Icon – an icon used to launch the app
      • JS pages
      • CSS pages
      • Please see: https://developer.chrome.com/apps/first_app
    • Security
      • No inline scripts – all scripts are stored in external files (see the sketch after this list).
      • Self-contained – everything the packaged app needs to run has to be provided locally; no remote scripts can be referenced.
      • Supported APIs – a few APIs are not supported in a packaged app for security reasons.
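
    For example, the inline handler below would be rejected in a packaged app, while the same wiring done from an external script is allowed. The element ID and function name are made up for the example.

        <!-- Not allowed in a packaged app: inline event handler -->
        <button onclick="checkAnswer()">Check</button>

        <!-- Allowed: markup only, no inline script -->
        <button id="check-button">Check</button>

        // app.js (loaded via a script src reference): attach the handler here instead
        document.getElementById('check-button').addEventListener('click', checkAnswer);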

    Online vs. Offline Users

    • Similarities
      • Both can create accounts
      • Both have password-protected privacy
      • Both can make flashcards
      • Both can review each specific set of flashcards
      • Both benefit from automatic grading
    • Differences
      • Offline users cannot share their flashcards
      • Offline users cannot search and import flashcards from the online database

    Organization and Presentation

    Our application is organized around the underlying structure of the database. It is, in essence, a graphical interface with functionality built on top of the database. With this application, users are performing database transactions: they allocate a portion of the database for themselves by registering an account, add content to their accounts by creating card sets, and add questions to those card sets.

    This application extends the database in that it provides users with feedback. For instance, when users are reviewing their notes, the app checks their answers against the correct ones on file. Furthermore, it provides a level of social interaction: users can share content by copying their card sets from local storage to global storage, and once content is in global storage, other users can download it.
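
    The automatic grading mentioned above amounts to a simple comparison of the user's answer with the one stored in the set, plus a running score for the completion screen. The field names in this sketch are made up for illustration.

        // Hypothetical sketch: check an answer against the one on file and keep score.
        var score = { right: 0, wrong: 0 };

        function grade(question, userAnswer) {
          var correct = userAnswer.trim().toLowerCase() === question.a.trim().toLowerCase();
          if (correct) { score.right++; } else { score.wrong++; }
          return correct;
        }

        // Example: grade({ q: 'Powerhouse of the cell?', a: 'Mitochondria' }, 'mitochondria') -> true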

    Outcomes

    As beginners in web development, we found it relatively easy to learn to develop for Chrome OS. Unlike general web development, we did not have to account for compatibility issues between different browsers such as Firefox*, Internet Explorer*, Safari*, Opera*, and Chrome. With Chrome applications, developers only have to ensure the application works in the Chrome browser, so no extra source code is needed to render correctly across browsers. As of this writing, our flashcard application has been tested on:

    • 64-bit Windows 8 Enterprise laptop
    • 32-bit Windows 8 Professional tablet
    • 32-bit Ubuntu* Desktop
    • 32-bit ARM* processor-based Chromebook
    • 32-bit Intel® processor-based Chromebook

    Across all platforms the app behaves the same.

    Possible Future Upgrades

    This app would really benefit from the ability to add pictures to the flashcards. This would provide a new way of taking notes. Chromebooks already have the ability to take pictures using the built-in camera, so integrating pictures into our app is a natural next step.

    Another upgrade that would make this app accessible to everyone is the ability to import and export card sets to and from a local storage device via a USB connection. The rationale for this is that not everyone is connected to the Internet. A large part of our app is about sharing notes and ideas. Users who are not connected to the Internet can’t share their creations with friends or family.

    For those who are connected to the Internet, their ability to share shouldn’t be limited to the people around them. A possible upgrade that would extend the social reach of this app is the ability to share card sets on social networks such as Facebook or Twitter. This upgrade would make it possible for users to make flashcards, post their scores, and challenge friends to beat their scores on the web.

    Appendix

    Section 1: Sample Manifest

    Sample Hosted app manifest

        {"name": "Sprinkle Tinkler","version":"1.2","manifest_version": 2,"minimum_chrome_version": "23","app": {"urls": ["*://www.sprinkletinkler.com"
       ],"launch": {"web_url": "http://sprinkletinkler.com"
       }
    },"icons": {"16" : "icon-128x128.png","128": "icon-128x128.png"
    },"permissions": ["unlimitedStorage","notifications"
       ]
    }

    Sample Packaged app manifest (web browser)

    
        {
          "name": "Flash Mania",
          "description": "An educational exploration.",
          "version": "1.2",
          "manifest_version": 2,
          "minimum_chrome_version": "23",
          "app": {
            "launch": {
              "local_path": "index.html"
            }
          },
          "icons": {
            "16": "assets/icon-128x128.png",
            "128": "assets/icon-128x128.png"
          },
          "permissions": [
            "<all_urls>",
            "storage",
            "fileSystem",
            { "socket": ["tcp-connect:*:*"] }
          ]
        }

    Sample Packaged app manifest (with a background script)

    
        {
          "name": "Flash Mania",
          "description": "An educational exploration.",
          "version": "1.2",
          "manifest_version": 2,
          "minimum_chrome_version": "23",
          "app": {
            "background": {
              "scripts": ["main.js"]
            }
          },
          "icons": {
            "16": "assets/icon-128x128.png",
            "128": "assets/icon-128x128.png"
          },
          "permissions": [
            "<all_urls>",
            "storage",
            "fileSystem",
            { "socket": ["tcp-connect:*:*"] }
          ]
        }

    Section 2: Loading packaged app onto Chrome for testing

    1. Store the app directory in an accessible location
    2. Open the Google Chrome browser
    3. Click the menu button
    4. Select Tools -> Extensions
    5. Select Developer mode
    6. Click Load unpacked extension
    7. Select the application folder
    8. Your app should now be loaded into the Chrome browser. Go to the upper right-hand corner and click "Apps"
    9. Click the app you just loaded to launch it.

    Section 3: Useful links

    Section 4: Licenses

    Credits

    Node.js - http://nodejs.org/ 
    WS for Node.js - https://github.com/einaros/ws 
    JQuery - http://jquery.com/ 
    JQuery-UI - http://jqueryui.com/ 
    CSS/HTML/JavaScript

  • Chrome OS
  • Chrome app development
  • Developers
  • Google Chrome OS*
  • JavaScript*
  • Beginner
  • User Experience and Design
  • Tablet PC

    Take control of your app in the palm of your hands


    Have you ever imagined a game in which you can fly a kite using your hands? And what if that game let you maneuver the kite with gestures you already know, such as winding or releasing the line? It may sound unbelievable, but the team at Animagames already had this idea last year:

    But then you, as a developer, ask the big question: what kind of sorcery is this? Well, many Brazilian developers asked the same question last year when we presented the Perceptual Computing SDK. Renamed the RealSense SDK in 2014, it lets developers enrich the way their apps interact with users and adds "senses" to devices such as notebooks and tablets.

    So what does the SDK let you do?

    • Hand and finger recognition: 3D position of hands and fingers, simultaneously and in real time
    • Face tracking and analysis: in addition to tracking multiple faces, it identifies several points of interest such as the eyes, nose, and mouth
    • Background subtraction: a real-time chroma-key effect without having to carry a green screen everywhere
    • Augmented reality: make virtual elements interact with the real-world environment
    • Speech recognition

    Can you imagine how to implement all of these features in your app? No? That is one of the goals of the SDK: to make these capabilities easier to use and let the developer focus on what matters most, the app's content. Want to learn more about the technology? Click here.

    But how did the Animagames developers get access to the technology?

    Last year Intel Corp. ran the "Perceptual Computing Challenge 2013" contest, in which developers from all over the world could submit their app ideas; the best ones were selected for the implementation phase, and the best apps were awarded prizes. Since Brazil was left out of the worldwide contest, the Intel Software Brazil team put together a local edition, the "Perceptual Challenge Brasil", inviting Brazilian developers to show that the local community and companies can also produce high-quality apps, ranging from a robotic hand to an innovative way of measuring a person's flexibility! Click here to meet the other winners of the "Perceptual Challenge Brasil" contest.

    With the amazing apps and great results of the Brazilian contest, Intel Corp. decided to include Brazil as an eligible country in the 2014 edition of the international contest, the "RealSense App Challenge 2014". Another great thing is that the winners of the 2013 Brazilian contest were invited to take part in the international contest as "Ambassadors", competing directly against last year's international winners!
    As in the previous year, the contest is divided into two phases: idea submission and project implementation. The idea submission phase started on July 28 and will close on October 10. If you have a fantastic idea or an app that uses the capabilities of the RealSense SDK, don't waste time; show the world how far your creativity can go!
    This is a great opportunity for students, researchers, companies, developers, and designers who work with games, natural user interfaces (NUI), computer graphics, virtual reality, or augmented reality.

    Click here to learn more about the rules, the prizes, and how to take part in this incredible opportunity for the Brazilian community.

  • Intel RealSense
  • Development Tools
  • Game Development
  • Graphics
  • User Experience and Design
  • Intel® Perceptual Computing SDK
  • Intel® RealSense™ Technology
  • Perceptual Computing
  • .NET*
  • C#
  • C/C++
  • HTML5
  • Java*
  • JavaScript*
  • Unity
  • Windows*
  • Laptop
  • Tablet PC
  • Desktop PC
  • Developers
  • Partners
  • Professors
  • Students
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8

    How to get MDK?


    Newbie just getting started.

    I'm trying to get the MDK from the FAQ on this page: <https://software.intel.com/en-us/intel-mobile-development-kit-for-android>

    Step 4 says "Download Intel® System Studio.  This toolset helps you analyze your Android apps from Java to assembly code and CPU States."

    That link gets me to an error page saying to contact support.

    Step 5 tells me to come to this forum.

    FWIW, I did do steps 1 through 3 - and I'm anxious to get started!

    Query about running PIN on the Intel(R) Atom Z2580


     

    Can I run Pin (./pin --version) on an Android device based on the Intel® Atom™ processor Z2580 (up to 2.0 GHz, dual-core)? This is a 32-bit processor.

    HAXM memory setting not updated


    I have an Intel processor that supports virtualization technology, and it is already enabled in the BIOS.

    I ran the HAXM installer, set the memory limit manually, and it installed successfully.

    But the saved memory limit is always the default value of 1024 MB.

    I ran the setup again and changed it to an even lower limit, but it is not saved either.

    Has anyone had the same issue before?

    Thank you!

    How to create a social media app like Facebook, Google+, etc.


    How do I host a server side to update the app?


