The introduction of the iPad and other slates demonstrates that end users have a strong appetite for touch-enabled experiences. MFC 10 adds comprehensive support for touch functionality, allowing users with tablet PCs and digitizers to interact with applications in a simple, natural manner, including gestures and multi-touch.
When considering if and how to add touch support to an MFC application, the first point worth assessing is how well the application will perform in touch scenarios without any explicit support added at the application level. The Windows team has gone to great efforts to make applications work well even if they have no code present to deal with touch input. In these situations, Windows primarily treats touch in much the same way as a mouse, with screen taps equating to mouse clicks and an on-screen keyboard allowing users to enter text into edit controls. The exact behavior of touch input can be customized via the Windows Control Panel. Figure 1 shows a simple MFC dialog-based application that has an Edit Control to allow text input. This application has had no explicit touch support added. When it is run on hardware with digitizer support and the digitizer is used to set the focus to the Edit Control, an on-screen keyboard icon is displayed.
Figure 1. Out-of-the-box touch experience with MFC and Touch
Clicking the keyboard icon brings up the Touch Keyboard shown at the top of Figure 2. The Text Input Panel on Windows 7 has two modes that can be switched between using the button at the top-left of the panel. The Touch Keyboard offers standard key-based text input, while the Writing Pad (shown at the bottom of Figure 2) offers a much richer experience, with the option to write either character by character or in freehand style. Character-by-character mode provides blank cells where individual letters are entered with a pen, while freehand style offers a more natural, paper-like digital inking experience, with handwriting recognition algorithms used to form words and sentences. Windows 7 also supports personalizing handwriting recognition at the operating system level.
Figure 2. Text Input Options
Once some text has been entered in either the Writing Pad or Touch Keyboard, an Insert button allows the text to be moved into the Edit Control, as shown in Figure 3.
Figure 3. Writing Pad with text ready for insertion
If an MFC application is designed with a clean, simple interface, the inbuilt operating system support for touch- and pen-based input may be sufficient. After covering the two other options for touch integration below, we'll briefly return to this first choice and see how it can be improved on.
For some MFC applications, providing direct support for touch may lead to a better user experience. There are two ways an application can support touch: indirectly, by allowing Windows to translate the touch input into gestures (such as zoom, pan, or rotate), and directly, by receiving the low-level touch events and executing code that responds to them.
Responding to gestures in an MFC application is simple: Windows 7 and Windows Server 2008 R2 both support the WM_GESTURE Windows message. MFC translates the various panning, zooming, and rotating gestures that are all encapsulated by WM_GESTURE into distinct CWnd virtual methods like OnGestureRotate, which take care of parsing the information in the WM_GESTURE parameters into information specific to each gesture.
The MFC Gesture CWnd virtual functions are:
OnGestureZoom(CPoint ptCenter, long lDelta)
OnGesturePan(CPoint ptFrom, CPoint ptTo)
OnGestureRotate(CPoint ptCenter, double dblAngle)
OnGesturePressAndTap(CPoint ptPress, long lDelta)
Each function returns a BOOL that indicates whether the event has been handled. The MFC Class Wizard that ships with Visual C++ 2010 RTM does not currently have support for either the WM_GESTURE message or the OnGestureXXX virtual methods. However, adding the handlers is a simple exercise:
//in View header file
class CMyView : public CView
{
    // ...existing declarations...
protected:
    virtual BOOL OnGestureZoom(CPoint ptCenter, long lDelta);
};

//in View source file
BOOL CMyView::OnGestureZoom(CPoint ptCenter, long lDelta)
{
    //code for zooming by lDelta here
    return TRUE; //event handled
}
This article was originally published on Friday Aug 27th 2010
For certain types of applications, like drawing and CAD programs, the pre-processing of touch input into gestures will not be appropriate; the application needs to receive the raw input to determine whether a touch is a gesture or a drawing command. To achieve this, some preliminary work is required. The first step, which is not mandatory but generally sensible, is to determine what touch support is available on the hardware the application is running on. This is done by calling the Windows SDK function GetSystemMetrics, passing SM_DIGITIZER as the parameter. This function returns a bit flag, with the documented values from MSDN being:
Name Value Description
TABLET_CONFIG_NONE 0x00000000 The input digitizer does not have touch capabilities.
NID_INTEGRATED_TOUCH 0x00000001 An integrated touch digitizer is used for input.
NID_EXTERNAL_TOUCH 0x00000002 An external touch digitizer is used for input.
NID_INTEGRATED_PEN 0x00000004 An integrated pen digitizer is used for input.
NID_EXTERNAL_PEN 0x00000008 An external pen digitizer is used for input.
NID_MULTI_INPUT 0x00000040 An input digitizer with support for multiple inputs is used for input.
NID_READY 0x00000080 The input digitizer is ready for input.
The hardware support available can be determined by applying the bitwise & operator to these flags and the value returned from GetSystemMetrics(SM_DIGITIZER). Windows 7 supports multiple simultaneous touch point inputs, and calling GetSystemMetrics with a parameter of SM_MAXIMUMTOUCHES will return the number of touch points available.
After an application has determined that touch support is available, the next step is to call CWnd::RegisterTouchWindow on each CWnd-derived class for which direct touch input will be processed. Calling RegisterTouchWindow means that a window will no longer receive the higher-level gesture messages. Touch registration can be done at any time after a window is created, so if a window will always be touch-aware, the WM_CREATE message can be handled (the Visual C++ Class Wizard can be used to add a handler for this message), and RegisterTouchWindow can be called as shown below.
int CMyView::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CView::OnCreate(lpCreateStruct) == -1)
        return -1;
    RegisterTouchWindow();
    return 0;
}
As with gestures, touch messages are not currently supported by the Class Wizard, and the MFC virtual override to handle touch messages needs to be added manually:
//view header file
virtual BOOL OnTouchInput(CPoint pt, int nInputNumber, int nInputsCount, PTOUCHINPUT pInput);

//view source file
BOOL CMyView::OnTouchInput(CPoint pt, int nInputNumber,
    int nInputsCount, PTOUCHINPUT pInput)
{
    //handle touch message here
    return TRUE; //message handled
}
The first strategy for touch integration, covered at the start of the article, noted that simply by adopting a simple, clean interface, the built-in operating system support for touch and pen input via the Touch Keyboard and Writing Pad may be sufficient. One enhancement to this solution worth considering is spreading out the controls on the user interface if the application detects that touch interaction is active. This compensates for the less precise nature of touch input compared to using a mouse.
To implement this enhancement, the main window of the application would register to receive touch input messages, but the OnTouchInput method would return FALSE to indicate that the application does not want to handle the message, allowing normal tap processing to occur. Before returning FALSE, a flag can be set to indicate that touch-based interaction is occurring, and the UI can be adjusted accordingly.
Windows 7 and MFC 10 provide complete and rich support for building applications that "Think in Ink". Apart from the built-in operating system support that allows applications without any explicit touch and ink code to receive input via the Touch Keyboard and Writing Pad, there are two mutually exclusive modes of touch support. The default mode is gestures: a Windows-provided mapping of distinctive touch sequences into gestures like zoom and pan, which MFC further translates into a simplified set of CWnd virtual methods that can be overridden as required. The most comprehensive mode is registering to receive the low-level touch messages, which may come from multiple touch points simultaneously, and responding to these touch events in the message handler. Regardless of the touch integration strategy chosen, MFC provides a rich and complete framework for implementing it.