Escaping the design culture of average

“Designers will have to give away control over the user interface to a component at runtime, to which they don’t know what it will do.” Gottfried Zimmermann, in an interview for this thesis; #00:30:46-0# to #00:30:55-0#

How to decide for adaptivity

We need “a better understanding of how and when intelligence can substantially improve the interaction […]” (Höök 2000).

This leads to the question: “How to decide for and on adaptivity?” As illustrated before, there are two sides in any situation in HCI: the user and the system. Dieterich et al. (1993) and Gullà et al. (2015) propose five fundamental questions that have to be answered when deciding on adaptivity.

  • Who should adapt? Which side (user or system) has to take the main role in the adaptation process, and what should be the role of the user interface in that process?
  • What to adapt? Which level of interaction should be adapted? Which layer of the system has to adapt: presentation, task, or infrastructure? What are the adaptation variables?
  • Which goals should be promoted by the selected level of adaptation, based on which context? What is the aim of the application? Can it be achieved with adaptation techniques and the available data and resources (context of use)?
  • What kinds of data (information) should be mainly considered during the adaptation process?
  • What are proper rules to manage adaptation? When should the changes be made? What are the triggers for adaptation?

Who should adapt?

When we try to answer the question of who should adapt, we need to understand that the answer is not a definite one but lies somewhere on a continuum between one side and the other.

According to several researchers (Lavie & Mayer 2010, Kobus et al. 2001, Dieterich et al. 1993, Kühme 1993 and Browne et al. 1990), adaptive UIs can employ different levels of adaptivity. The level of adaptivity locates a system on a continuum between the user being in control and the system being in control of the adaptation, or somewhere in between, where adaptation is a cooperative process between user and system (Lavie & Mayer 2010). Such cooperative systems, called mixed-initiative interactions by several researchers (Schiaffino & Amandi 2004), lie between the two extreme points suggested by Dieterich et al. (1993), who proposed four levels of action in an adaptation process for any HCI situation and noted that at either extreme one side is in full control of the adaptation. These extremes define the ends of the continuum, which therefore comprises four levels of adaptivity.

Figure 31: Examples of an adaptable systems configuration scheme (a) and an adaptive system configuration scheme (b) (Dieterich et al., 1993).

Lavie and Mayer (2010) suggested that there are two stages, called “intermediate levels of adaptivity”, on the continuum between the two ends Dieterich et al. (1993) introduced (see Figure 31). This idea was originally covered by Browne, Totterdell and Norman (1990), who proposed four levels of adaptation. Combining the two ends proposed by Dieterich et al. with the two intermediate levels proposed by Lavie and Mayer (2010), the four stages are:

  • Manual: ‘Simple’ systems use a ‘hard-wired’ stimulus-response mechanism without adaptation – the user has to perform all actions.
  • User selection: ‘Self-regulating’ systems monitor the effects of the adaptation on the subsequent interaction – the user is presented with a set of proposals and has to decide whether to follow a recommendation or perform the action himself.
  • User approval: ‘Self-mediating’ systems monitor the effect of a system-initiated action – the user is asked to approve the system’s choice before it is initiated, or to choose another alternative.
  • Fully adaptive: ‘Self-modifying’ systems are capable of initiating actions without the user’s approval – the user is not presented with alternatives but can undo the adaptation at any stage. (A minimal code sketch of these four levels follows below.)
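
The following minimal sketch, in TypeScript, shows how a system might route adaptation proposals depending on the level. All names (AdaptivityLevel, AdaptationMediator, askUser) are hypothetical illustrations and are not taken from the cited authors.

```typescript
// Sketch: the four levels of adaptivity as a runtime policy.
enum AdaptivityLevel {
  Manual,        // no adaptation: the user performs all actions
  UserSelection, // system proposes a set, the user picks or ignores
  UserApproval,  // system picks one change, the user approves or rejects
  FullyAdaptive, // system applies the change; the user may undo it
}

interface Adaptation {
  description: string;
  apply: () => void;
  undo: () => void;
}

class AdaptationMediator {
  constructor(
    private level: AdaptivityLevel,
    // askUser resolves to the index of the chosen option, or null if dismissed
    private askUser: (prompt: string, options: string[]) => Promise<number | null>,
  ) {}

  async handle(proposals: Adaptation[]): Promise<void> {
    if (proposals.length === 0 || this.level === AdaptivityLevel.Manual) return;

    if (this.level === AdaptivityLevel.UserSelection) {
      // User selection: present all proposals, let the user decide.
      const choice = await this.askUser(
        "The system suggests these changes:",
        proposals.map((p) => p.description),
      );
      if (choice !== null) proposals[choice].apply();
    } else if (this.level === AdaptivityLevel.UserApproval) {
      // User approval: the system commits to one proposal but asks first.
      const best = proposals[0]; // assume proposals arrive ranked
      const answer = await this.askUser(`Apply "${best.description}"?`, ["Yes", "No"]);
      if (answer === 0) best.apply();
    } else {
      // Fully adaptive: act immediately, but keep the change undoable.
      proposals[0].apply();
    }
  }
}
```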

Isbell & Pierce (2005) recommended a fifth stage in their Interface-Proactivity Continuum. Their approach is context-focused and provides an estimate of the cost of failure, which helps in deciding which level of adaptivity to choose. With the continuum they introduced a shared vocabulary for discussing and comparing adaptive interfaces, allowing a more nuanced conversation. They use the simple example of an alarm clock to illustrate the different stages of adaptation.

Figure 32: Interface-Proactivity Continuum adapted from Isbell & Pierce 2005

Gottfried Zimmermann, Professor at the HdM Stuttgart, mentioned in an interview for this thesis that “it would be nice […] if we can get towards automatic adaptivity more and more so not just adaptable” (#00:16:24-0# - #00:16:32-0#, Gottfried Zimmermann). He further mentioned that there are some challenges along the way, such as the user’s ability to keep up with technological development and the mindset of designers, who have to give up control over the final result. He recommends a middle way between automatic adaptivity and adaptability, which he believes is the next step to take: “The middle way would be that the designers themselves define situations define multiple user interfaces and the conditions under which they would appear and be activated” (#00:44:47-0# - #00:45:01-0#, Gottfried Zimmermann). The designers would thereby give up partial control while still being able to rely on their way of working, preparing the interface parameterization at design time.
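
The following sketch illustrates what such a “middle way” could look like in code: the designer prepares several UI variants at design time, each guarded by an activation condition evaluated at run time. The Context shape, the variant names and the priority rule are assumptions made for this example, not part of Zimmermann’s proposal.

```typescript
// Sketch: design-time UI variants with run-time activation conditions.
interface Context {
  screenWidthPx: number;
  ambientLightLux: number;
  userPrefersLargeText: boolean;
}

interface UiVariant {
  name: string;
  activateWhen: (ctx: Context) => boolean;
  priority: number; // resolves conflicts when several conditions match
}

// Only variants the designer prepared can ever be activated.
const variants: UiVariant[] = [
  { name: "large-text", priority: 2, activateWhen: (c) => c.userPrefersLargeText },
  { name: "high-contrast", priority: 1, activateWhen: (c) => c.ambientLightLux > 10_000 },
  { name: "default", priority: 0, activateWhen: () => true },
];

function selectVariant(ctx: Context): UiVariant {
  return variants
    .filter((v) => v.activateWhen(ctx))
    .sort((a, b) => b.priority - a.priority)[0];
}
```

The designer thereby gives up control only over which prepared variant appears, never over what a variant looks like.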

What to adapt?

To answer this question, we have to divide the interface into layers. I will describe which levels of interface adaptation can be chosen from, based on Bonsiepe’s (1998) theory of the interface as the major domain of design, Norman’s (2002) conceptual models of design, and Jenson’s (2002) recommendation of a three-layer division of user experience.

Understanding Interface

According to Bonsiepe (1998), an interface is a relationship between the user, the action and the tool. Bonsiepe describes the interface as a bridge that forms the tool so it can be used by a user to fulfill an action efficiently. Without an interface there is no tool, just components. The interface is therefore what design focuses on. Bonsiepe uses a pair of scissors as an example to explain the interface. Scissors are made of two blades and two handles. Only the action of using the handles to cut something makes them scissors. The interface is thus what constitutes the tool, the scissors. If the user uses the handles and blades in a different way, e.g., for the action of opening a beer bottle, the interface makes the scissors a beer opener. Design therefore has to put its effort into the interface, so that a user understands which actions he can fulfill.

While scissors consist mainly of hardware parts, a digital product consists of two parts, the software (digital) and the hardware (physical). A digital product, furthermore, is not one single tool; in fact it is often thousands of tools in one. The intersection of these two parts is what makes a digital product a platform that can provide more than one tool via the interface.

Figure 33: The ontology chart of design (Bonsiepe 1998)

The user can interact with the interface through actions to use the tools that a platform provides, and an adaptive platform can also interact with the user through the interface; the interaction is bidirectional. The access through the interface is what actually constructs the tools of the product. Bonsiepe isolated the tool, the action and the user to describe his understanding of interfaces. As mentioned in chapter 2.1.3 (Context of Use), it is crucial to understand that the environment heavily influences every other component in an HCI situation. Therefore, the environment has to be taken into account when talking about interfaces as the intersection of design.

Figure 34: The ontology chart of user interface design; Interface definition adapted from Bonsiepe (1998)

Breaking down User Experience

According to Bonsiepe, the interface is the central domain of design: it is what is designed to enable a user to fulfill an action efficiently. Don Norman (2002) describes this similarly when he talks about the three conceptual models: the design model, the user’s model and the system image. The designer has a clear conceptual model (design model) of how a product should work and which actions can be fulfilled by a tool in an efficient way. However, most of the time the designer is not present to explain his conceptual model to the user. The designer can only communicate his model through the system image, which is, in Bonsiepe’s (1998) terms, the interface. In the absence of the designer, the user has to rely on all the information he can get out of the system image (interface) to use a product efficiently (Norman, 2002). As illustrated by the scissors/beer-opener example, the user can interpret an interface in a different way (user’s model ≠ design model) and therefore use the tool to fulfill an action inefficiently. This can result in a bad user experience, which is not what design is aiming at. To understand where design can intervene, we need to understand how designers can successfully communicate their design model to the user through the system image (interface).

Figure 35: The ontology chart of user interface design; Interface definition adapted from Bonsiepe (1998); Conceptual models (design model, user‘s model and system image) adapted from Norman (2002)

As said, the designer tries to design the system image (interface) in a way that successfully communicates the designer’s conceptual model to the user. The designer is thus building the system image so that it can be experienced by the user, independently, in the way the designer intended. The experience aims at synchronizing the design model with the system image and the user’s model. Consequently, the design of the interface (system image) is the design of a user experience.

User experience is a broad and fuzzy term, widely used across all disciplines of design. Jenson (2002) wrote in his book “The Simplicity Shift” that the term “user experience” is too imprecise. He proposes to understand user experience better by breaking it up into three broad levels: the presentation layer, the task layer, and the infrastructure layer. With his definition in mind, we can have a more nuanced conversation about user experience.

  • The presentation layer: the visual/graphical appearance. How does the product appear in the real world? What is the choice of color? Which materials are used? What is the visual design language?
  • The task layer: the actions and user models. How does the application operate and flow? What are the core concepts of the system image the user must understand in order to use the product efficiently?
  • The infrastructure layer: the technology the product is built upon. How does the product make use of hardware and software? What are the input modalities? Can it understand speech input?

The 3-layer model can be applied to the ontology chart of user interface design. A digital product becomes part of the environment through the presentation of the product; the presentation layer therefore covers the experience of the user interacting with the interface in an environment. The interface is also part of the task layer of the experience, because the task layer uses the interface as a bridge between the concrete real world (presentation layer) and the abstract platform (infrastructure layer). On this layer a user can fulfill an action in an environment through the interface, which enables him to use a tool (product) provided by the platform and interface. The infrastructure layer mostly covers the platform level of the ontology chart and provides the base technology, the different components of the product. Every user experience of a digital product is thus defined by three layers that influence each other bottom-up.

Figure 36: The ontology chart of user interface design; Interface definition adapted from Bonsiepe (1998); Conceptual models (design model, user‘s model and system image) adapted from Norman (2002); 3-layers of user experience adapted from Jenson (2002)

Design parameters of adaptive systems

To understand which parameters of an adaptive system can be adapted, we have to look more closely at the three layers of UX. For this I recommend the six-layer human-computer interaction model by Herczeg (2006). Herczeg proposed using five of the six layers to describe the design parameters of an adaptive system, leaving out the intentional layer: changing the task structure would change the whole system functionality and is therefore not considered for personalization (Herczeg 2006). The five layers that describe the design parameters of an adaptive system are listed below; a short sketch after the list illustrates each layer with an example.

  • The lexical layer: change of information coding
  • The semantic layer: change of interaction objects
  • The pragmatic layer: change of interaction flow
  • The syntactic layer: change of interaction
  • The sensomotoric layer: change of input & output modalities
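
As a rough illustration (the example operations are my own assumptions, not Herczeg’s), the five layers could be represented like this:

```typescript
// Sketch: Herczeg's five adaptable layers with one example operation each.
type AdaptableLayer =
  | "lexical"       // information coding
  | "semantic"      // interaction objects
  | "pragmatic"     // interaction flow
  | "syntactic"     // interaction
  | "sensomotoric"; // input & output modalities

const exampleAdaptations: Record<AdaptableLayer, string> = {
  lexical: "show text labels instead of icons for novice users",
  semantic: "hide rarely used toolbar buttons",
  pragmatic: "skip confirmation steps for expert users",
  syntactic: "offer a keyboard shortcut for a frequent command",
  sensomotoric: "switch output from screen to speech while driving",
};
```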

Inspired by the 3-layer model of user interface adaptation aspects from Zimmermann et al. (2014), I propose a 3-layer model of adaptive UX which covers the five adaptive design parameter layers from Herczeg (2006), populated with some examples. Note that the sensomotoric layer spans two layers of the 3-layer UX model, the infrastructure and the presentation layer, as these two cover the input and output modalities of a system.

Aspects of the upper layer (presentation layer) are usually more likely to be adaptable. This can be done by interface parametrization, which enables the user to modify the presentation within a set of fixed parameters developed and implemented at design time (Zimmermann et al. 2014). Aspects of the middle layer (task layer) and the lower layer (infrastructure layer) are more likely to be handled by interface integration, prepared at design time but implemented at run time on the user’s behalf (Zimmermann et al. 2014).
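
A minimal sketch of the two mechanisms, under assumed names and data shapes (Zimmermann et al. describe the concepts, not this code):

```typescript
// (a) Interface parametrization: the user moves within ranges that were
// fixed at design time.
interface PresentationParams {
  fontScale: number;
  contrast: "normal" | "high";
}

const FONT_SCALE_RANGE = { min: 0.8, max: 2.0 }; // a design-time decision

function setFontScale(params: PresentationParams, requested: number): PresentationParams {
  const fontScale = Math.min(FONT_SCALE_RANGE.max, Math.max(FONT_SCALE_RANGE.min, requested));
  return { ...params, fontScale }; // the range itself cannot be escaped at run time
}

// (b) Interface integration: parts prepared at design time are composed
// at run time on the user's behalf.
interface UiPart {
  id: string;
  render: () => string;
}

function integrate(available: UiPart[], neededIds: string[]): string {
  return available
    .filter((part) => neededIds.includes(part.id))
    .map((part) => part.render())
    .join("\n");
}
```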

The 3-layer model of Adaptive UX proposed in this chapter provides the basis for understanding the parameters behind the question “what to adapt?”. It is now up to the individual project requirements to find a valid answer, as there is no such thing as a “universal answer” to this question.

Figure 37: 3-Layer model of Adaptive UX; inspired by the 3-layer model of user interface adaptation aspects from Zimmermann et al. (2014) and the six-layer human-computer interaction model by Herczeg (2006)

Which goals should be promoted?

The main purpose of user interface adaptation is to create a clear, straightforward and convenient user experience (Wesson et al., 2010). To decide properly on the scope of adaptivity, we must be aware of the challenges adaptivity can bring with it.

Challenges of adaptivity

Various authors illustrate that adding adaptivity to an interface can have serious negative side effects. Yang et al. (2016) reported that adaptive UIs can, rather than improving the user experience, feel inconsistent and distant rather than personal. This results in an increase in cognitive workload (Zimmerman et al. 2007) and a decrease in the user’s sense of control (Höök 2000, Weld et al. 2003). The goal of adaptivity, however, is to improve the usability of a system. Because adaptive UIs tend to violate the well-established usability principles developed for direct-manipulation software, most negative side effects are considered usability challenges that have to be solved to fully unleash adaptivity’s potential (Höök 2000). In the paper “Steps to take before Intelligent User Interfaces become real”, Höök proposes three categories of challenges: (1) control, transparency and predictability, (2) privacy and trust, and (3) the secondary effect of treating systems as fellow beings.

Control, Transparency and Predictability

Höök (2000) lays out that in order to make a system predictable, the system has to provide the user with some sort of control, so that the same input always produces the same response. Predictability helps the user to understand and build a concept of the system’s inner workings, and a predictable system automatically becomes transparent. Höök (2000) further explains that systems which adapt to their users and change over time to better fit users’ needs necessarily violate the usability principle of predictability and have a high chance of being intransparent, which may hinder users’ control over the system. Lieberman (2009) proposes in another paper that, due to the complexity of the AI algorithms used to provide adaptivity, interface design needs to pay special attention to transparency and explanation. Since this technology is “smarter than the user”, Lieberman further proposes that such systems may need to spend some time communicating the process and how it can serve the user. Further usability research on transparency and predictability points out that the user is capable of understanding and predicting system behavior on three different levels of detail. According to Jameson (2008), those three levels are (1) exact layout and responses, (2) success at specific subtasks and (3) overall competence.

Predictability and transparency can be fostered when, e.g., menus, icons, and lists function in the exact same way over and over again. This helps the user to reach automatic processing (Hammond 1987). Automatic processing means a user can handle parts of the interface quickly, accurately, and with little or no attention (Jameson 2008). This kind of static behavior and high predictability is important for interface elements that are frequently accessed by a skilled user, e.g., icons in a control panel or options in a menu (Findlater & Gajos 2009). A comparison of the results of two studies, which did not compare adaptation frequency directly, shows that frequent changes in adaptive behavior decrease the efficiency and satisfaction of users due to lower predictability (Sears & Shneiderman 1994, Findlater & McGrenere 2004). This illustrates the need to consider the frequency of adaptation as a factor that can influence the predictability and transparency of a system.
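
A minimal sketch of how adaptation frequency could be treated as an explicit design parameter: the system proposes a change only after enough consistent evidence, and no more often than a minimum interval. The class and its thresholds are invented example values, not taken from the cited studies.

```typescript
// Sketch: throttling adaptation frequency to protect predictability.
class ThrottledAdapter {
  private evidence = new Map<string, number>();
  private lastAdaptedAt = 0;

  constructor(
    private minEvidence = 5, // observations needed before adapting
    private minIntervalMs = 7 * 24 * 3600 * 1000, // at most one change per week
  ) {}

  // Returns the signal to adapt to, or null if the system should wait.
  observe(signal: string, now: number): string | null {
    const count = (this.evidence.get(signal) ?? 0) + 1;
    this.evidence.set(signal, count);

    const enoughEvidence = count >= this.minEvidence;
    const enoughTimePassed = now - this.lastAdaptedAt >= this.minIntervalMs;
    if (enoughEvidence && enoughTimePassed) {
      this.lastAdaptedAt = now;
      this.evidence.delete(signal);
      return signal;
    }
    return null;
  }
}
```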

According to Jameson (2008), predictability and transparency are more important to a user if the system is performing a more or less complex task on the user’s behalf (e.g., searching for suitable products on the web). Jameson further proposes that the most universal form of predictability and transparency lies in the user’s competence to assess the system’s capabilities. If the user underestimates the system’s capabilities, he will never be able to use the system’s full potential; if he overestimates them, he will rely too heavily on the system. According to Lieberman (2009), adaptive UIs may require more explanation of the interface or its features in order to ensure that the user fully understands its capabilities. Höök (2000) believes that the fear of upsetting users by not providing them with insights into how a system works is not a problem unique to adaptive UIs. As long as a system works as it is expected and designed to work, there is no need for a complete model of how it works. For those of us who are not computer scientists, most applications work like this today.

Maes (1997) points out that the same argument about predictability and transparency can be made about driving cars. Most car drivers don’t have a complete model of how the engine or the brakes work, yet, as she points out, people still manage to drive cars really well without these complete models. As the complexity of the systems we deal with increases, it is necessary to understand that most users can’t be bothered with a complete model of the system. Take, for example, an automatic brake system in a car: you only have to predict that it does what it is supposed to do (brake), and that it brakes when you consider this action reasonable, e.g., to prevent a crash. Most of the actions performed by an adaptive system are proposed by the system and happen before the user decides to take action. Users have to experience that these actions can be understood, and therefore predicted; this is achieved when the action is performed on the user’s behalf consistently, over and over again.

If an action is comprehensible to a user, the chances are high that it exceeds the user’s expectations and results in higher satisfaction. Predictability, therefore, may not be the right term here. Lieberman (2009) argues that reasonableness of behavior is a better criterion than predictability by itself, because most of today’s complex commercial software is already unpredictable. He uses the example of placing a picture into a document in a word-processing application like Microsoft Word and asks whether you can tell where exactly the picture will appear when imported (Lieberman 2009). The system therefore has to explain itself, e.g., by giving a response before the action is done. This benefits transparency, which can make the system’s behavior appear reasonable to the user.

Privacy & Trust

Besides the user’s ability to predict and understand the system’s behavior and feel in control, the domain of adaptive UIs has to deal with a topic that has also become prominent in conventional user interface design: trust and privacy concerns. Every system that adapts to an individual user has to acquire data. Hartmann (2009) recommends that the algorithms providing adaptation should work with as little data as possible and try not to change user behavior, in order to protect the user’s privacy sufficiently. The data collected must be just enough to provide accurate support when needed, because even a single failure of the system can have a major negative effect on the user’s trust in it (Hartmann 2009, Höök 2000).

This is partly a question of culture: once users are accustomed to adaptive systems supporting them, they will gradually build models of how these systems work and when they can be trusted (Höök 2000). Besides, the user has to accept that the system builds some kind of user model to provide adaptation mechanisms. These models can be formed using (a) data that directly identifies the user, e.g., demographic data or biometric data like fingerprints, and (b) data related to preference, taste and behavior, e.g., the history of search queries or data about system settings. This data can be stored on the device or elsewhere, e.g., on other users’ devices (social data) or on cloud servers. The method of storage has to be made clear to the user, as storing data on devices other than the user’s own can raise major privacy concerns. The concerns about privacy may vary depending on the kind of data acquired; however, when it is not properly described where the data is stored, these concerns may keep the user from using the system or relying on its support. This is, for example, why Apple explicitly referred to storing users’ fingerprint data on the user’s own device in a secure layer when presenting the Touch ID feature and sensor back in 2013.

“All fingerprint information is encrypted and stored securely in the Secure Enclave inside the A7 chip on the iPhone 5s; it’s never stored on Apple servers or backed up to iCloud®” (Apple Press Info 2013)

A good framework of four factors in personalized systems that impact privacy is provided by Cranor (2004). The first factor is how much of the collected data is explicit or implicit: data explicitly provided by the user, e.g., demographics, versus data implicitly provided, e.g., the history of search queries. The more data is collected implicitly, the more likely the system is to violate privacy. The second factor is the duration for which the data is stored. Storing data only for task- or session-focused personalization is more likely to protect privacy, whereas persistent, profile-based personalization requires long-term storage of personal data, e.g., in a user profile, and can raise more privacy concerns. The third factor is the user’s and system’s involvement in initiating personalization: user-initiated personalization is more privacy-friendly than system-initiated personalization because users feel more in control. Finally, the fourth factor is whether the personalization is based on predictions from social data or on precise content-based data. The more content-based the data is, e.g., recommendations closely bound to the current task, the less likely it is to violate the user’s privacy, because the data used comes from the task and not from the user. Cranor’s recommendations provide a general understanding of how these factors influence privacy; they are, however, Cranor’s assessments and have to be reviewed for every use case individually.
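
To make the four factors tangible, the following sketch turns them into a crude heuristic score. The weights and the scale are invented for illustration; Cranor gives no such formula.

```typescript
// Sketch: a heuristic privacy-impact score over Cranor's four factors.
interface PersonalizationDesign {
  implicitDataShare: number;         // 0..1: fraction of data collected implicitly
  storage: "session" | "persistent"; // duration of data retention
  initiation: "user" | "system";     // who triggers the personalization
  basis: "content" | "social";       // what the predictions are based on
}

function privacyImpact(d: PersonalizationDesign): number {
  let score = d.implicitDataShare;            // more implicit data -> more impact
  if (d.storage === "persistent") score += 1; // long-term profiles raise concerns
  if (d.initiation === "system") score += 1;  // users feel less in control
  if (d.basis === "social") score += 1;       // data is about the user, not the task
  return score; // 0 (least concern) .. 4 (most concern)
}
```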

Treating systems as fellow beings

Adaptive UIs are often seen as companions because of their tendency to be smarter than the user and therefore able to offer support. This often leads the user to expect the system to take charge of what the user did before, which is not the case. No matter how smart these systems are, we have to remind ourselves that they are still computer systems that can fail from time to time, due to human error, and that they can’t be blamed for something they are not responsible for.

Jameson (2009) therefore mentioned that it is especially important how, and in which way, the adaptive part of the system is presented to the user. He further proposes that an anthropomorphic appearance can trap the user into developing false expectations. So we must be careful not to imitate human-human communication, because these systems are not human.

“Here we must be careful of imitating human-human communication and assuming that that would be the best model of performance. Agents or adaptive systems should be implemented so that they become an integral part of an otherwise well-designed system.” Höök 2000, p. 6

One popular example of an AI that tried to mimic human-human communication is Microsoft’s Twitter bot Tay, which the company described as an experiment in “conversational understanding”. The more Twitter users chatted with Tay, the smarter it got, or at least so it seemed for a few hours. It took Tay less than 24 hours to be corrupted and turn from “humans are super cool” into an anti-feminist and racist bot (see Figure 38), forcing Microsoft to shut Tay down less than 24 hours after the experiment launched (Vincent 2016). This example illustrates that AI technology isn’t quite ready to imitate human-human communication yet. We must therefore be careful to design these systems in a way that does not invite treating them as fellow beings, because they can’t take credit or responsibility.

Figure 38: Some of Microsoft Tay‘s tweets on March, 23 - 24 2016

What kinds of data?

When we try to answer this question, we first have to understand the different ways in which an application gathers data.

Data gathering

This can be done in two ways: (1) by explicitly asking and (2) by implicitly collecting (Alvarez-Cortes et al. 2009). The latter method requires more interpretation of the collected data in order to build a valid user model. Explicit data collection is done by prompting the user with questions about the information the application needs in order to build a user model, as in Apple Music’s onboarding process during the user’s first-time experience (see Figure 39).
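
The following sketch shows the two gathering modes feeding one user model. The field names are hypothetical and only loosely inspired by a music application like Apple Music.

```typescript
// Sketch: explicit and implicit data gathering into one user model.
interface UserModel {
  favoriteGenres: Set<string>;     // explicit: asked during onboarding
  playCounts: Map<string, number>; // implicit: observed behavior
}

// (1) Explicit: the application asks, the user answers.
function recordOnboardingAnswer(model: UserModel, genre: string): void {
  model.favoriteGenres.add(genre);
}

// (2) Implicit: the application observes an event.
function recordPlayEvent(model: UserModel, genre: string): void {
  model.playCounts.set(genre, (model.playCounts.get(genre) ?? 0) + 1);
}

// Implicit data needs interpretation before it counts as a preference.
function inferredGenres(model: UserModel, minPlays = 20): string[] {
  return [...model.playCounts.entries()]
    .filter(([, plays]) => plays >= minPlays)
    .map(([genre]) => genre);
}
```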

Figure 39: Onboarding process, first time experience of Apple Music

Implicit data collection is mostly done through the device’s sensors, by tracking the user’s interaction behavior (Alvarez-Cortes et al. 2009). Here is a list of sensor categories that can gather different kinds of data in Apple’s latest iPhone (the iPhone 7); a sketch after the list shows how such readings might be aggregated into a context snapshot:

  • Location sensor: precise location coordinates from GPS
  • User orientation sensor: directional heading from a digital compass
  • Touch sensors: Multi-touch input from one or more simultaneous gestures
  • Light sensitive sensor: Ambient light detection
  • Device orientation & motion sensor: from built-in accelerometer
  • Proximity sensor: device closeness to other objects/devices or people
  • Audio sensor: input from a microphone
  • Image & video sensors: capture/input from a camera (all kinds of data can be gathered from real-time visual information)
  • Device sensor: through Bluetooth, Wireless LAN
  • Audio broadcast sensor: FM transmitter (rumored on iPhone)
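
As a sketch of how such readings might be aggregated for the adaptation logic, consider the following context snapshot. The field names are assumptions and are not tied to any real iOS API.

```typescript
// Sketch: aggregating sensor readings into one context snapshot.
interface ContextSnapshot {
  location?: { lat: number; lon: number }; // location sensor
  headingDeg?: number;                     // digital compass
  ambientLightLux?: number;                // light-sensitive sensor
  isMoving?: boolean;                      // derived from accelerometer data
  nearbyDeviceCount?: number;              // Bluetooth / WLAN scan
  capturedAt: number;                      // epoch milliseconds
}

function describeContext(c: ContextSnapshot): string {
  if (c.isMoving) return "on the move";
  if ((c.ambientLightLux ?? Infinity) < 10) return "in the dark";
  return "stationary";
}
```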

The design of the algorithms that make use of this quantity of data will be the next step in human-centered design, as Harry West from Frog Design argued in an interview with Fast Company (Woods 2017).

Applications not designed with adaptivity in mind can suffer from irreparable defects in their data, which prevent the design from improving. Because the application Yang et al. (2016) redesigned for adaptivity had not been designed with adaptivity in mind, they encountered two problems: first, their system did not log all the information they needed; second, some logs contained poor-quality data. Both problems produced irreparable defects in the data and prevented the application from providing appropriate adaptation and a delightful user experience.

As Gottfried Zimmermann (#00:43:00-0# - #00:43:12-0#) mentioned in an interview for this thesis: “yes collecting data is probably important but there‘s no automatic way […] of designing a good user interface and then finding good adaptivity mechanisms”. One approach to providing good adaptivity mechanisms is to introduce an explicit exploratory design phase that helps designers understand the possibilities for adaptation and the trade-offs among the various possible solutions (Zimmerman et al. 2007, Spaulding & Weber 2009). There are also methods that let designers evaluate adaptivity at the prototyping stage, such as including a human as part of the prototype, employing the well-known Wizard-of-Oz method (Horvitz et al. 1998). Still, there is a lack of good tools for designers to find out about (1) the data needed, (2) the possible solutions and (3) the trade-offs among those solutions (Zimmermann #00:07:07-0# - #00:08:18-0#, and Yang et al. 2016).

High-level – low-level tasks

Because one of the opportunities for adaptive systems is to support users in filtering and dealing with information overload (Alvarez-Cortes et al. 2009), activity recognition – trying to understand what high-level activities people are performing from sensor input – is becoming an active area (Lieberman 2009). I therefore developed a method called the High-Level Low-Level Task Framework (HL Framework). The following example of a user deleting emails illustrates the usage of the HL Framework and how it can help the designer find out about (1) the data that needs to be collected, (2) the possible solutions for supporting the user with information overload and (3) possible trade-offs among the various solutions.

HL Framework

The HL Framework starts with the designer recognizing a function within the application that fulfills a task on its own (a low-level task) but can, in a sequence of actions, be used for different purposes (high-level tasks). In this case the designer focuses on the function to delete an email (the low-level task). The frequency of this action can point to different intended goals (high-level tasks), e.g., cleaning the inbox or deleting all newsletters from X. The question now is: “What’s in between?”

Figure 40: What is in between?

The HL Framework proposes four stages in between. First, the interaction: the way in which the action was activated, e.g., by keyboard command or gesture. Second, the objects the user is interacting with, in this case the email itself and the inbox. Third, the attributes of those objects, e.g., the author, category and subject of the email; at this stage the designer can make first predictions, e.g., about the categories an email belongs to. Fourth, the patterns that emerge from these stages, e.g., the user only deletes emails from a specific author. These patterns let the designer make guesses about the high-level tasks the user may want to accomplish. The high-level tasks can then be used to provide functionality for an adaptation that supports filtering and dealing with information overload, e.g., a shortcut to delete all newsletters from author X (see Figure 41).
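
A minimal sketch of the pattern stage for the delete-email example: low-level delete events are logged together with the objects’ attributes, and a simple frequency rule guesses a high-level task worth a shortcut. The types, the “newsletter” category and the threshold are illustrative assumptions.

```typescript
// Sketch: mining low-level delete events for a high-level task.
interface DeleteEvent {
  interaction: "keyboard" | "gesture" | "button";   // stage 1: interaction
  object: "email";                                  // stage 2: object
  attributes: { author: string; category: string }; // stage 3: attributes
}

// Stage 4: patterns across the logged events.
function suggestHighLevelTask(log: DeleteEvent[], minRepeats = 5): string | null {
  const counts = new Map<string, number>();
  for (const e of log) {
    const key = `${e.attributes.author}|${e.attributes.category}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  for (const [key, n] of counts) {
    const [author, category] = key.split("|");
    // A repeated pattern suggests a high-level task worth supporting.
    if (n >= minRepeats && category === "newsletter") {
      return `Delete all newsletters from ${author}?`;
    }
  }
  return null;
}
```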

Figure 41: Example of the HL Framework in use. Use-case: delete email.

Conclusion

This method helps in designing supportive adaptations that prevent users from getting stuck in “target practice”—long chains of repetitive direct-manipulation operations to accomplish high-level tasks (Lieberman 2009). We need more methods like this in order to make the idea of adaptive UIs a standard part of UX design and development (Yang et al. 2016). Note that the outcome of this method is not production-ready: the proposed support functions for identified high-level tasks still have to be tested and iterated on with methods suitable for adaptivity, e.g., Wizard-of-Oz, at the prototyping stage. Nevertheless, this method provides an understanding of how we can answer the question “What kinds of data (information) should be mainly considered during the adaptation process?” for each project individually.

What are proper rules to manage adaptation?

As stated before, adaptive systems strive to deliver a UX as great as or even greater than that of non-adaptive, static or adaptable systems. To describe the challenges an adaptive system faces in delivering such a UX, the UX Honeycomb can be taken as a basis.

UX Honeycombs

The User Experience Honeycomb was originally developed by Peter Morville in 2004 to describe the facets of user experience (UX) in web design. It was refined by James Melzer in 2005, who exchanged the positions of two cells, accessible and credible, to create a clear division between two key areas of UX. Although first developed for web design, the UX Honeycomb can be seen as a universal description of the facets of UX, and Melzer’s refinement made the model even more universal.

Melzer (2005) considers the left side of the Honeycomb to be the three facets of affordance. He points out that to support a user, a thing must be usable, findable and, above all, accessible. To be usable, a UI has to be easy to use; this sounds tangible and necessary, but is often not considered sufficiently. To be findable, the design must strive to create signifiers, i.e., perceivable cues that indicate which actions are possible and where they can be performed, and to locate objects in a way that lets users find what they need. For an interface to be accessible, a wide range of users with all kinds of perceptions and abilities (e.g., disabilities) has to be considered. In summary: “affordance answers the question: can the user find and use it?”

Furthermore, Melzer (2005) calls the right side the utility side. The three facets for an experience to become utilized are useful, desirable and credible. To be useful, an interface must be capable of bringing the user one step further. To make an interface desirable, the designer must appreciate “the power and value of image, identity, brand, and other elements of emotional design”. The last facet towards utilization of an interface is to be credible and trustworthy; projects like the Web Credibility Project are a beginning in understanding the design elements that influence whether users trust and believe what a system tells them. Melzer (2005) states: “Utility answers the question: does it fulfill the users need or desire?”

In the middle of the honeycomb is the key facet of experience design. Both authors describe that in order to become a great user experience, the experience has to deliver value to its user. This value “is derived from these two outer concepts, along the lines of utility + affordance = value” (Melzer 2005).

Figure 42: UX Honeycombs (Morville 2004, Melzer 2005)

Adaptive UX Honeycombs

The six major challenges an adaptive system has to face to deliver a UX as great as or greater than that of other systems are captured in the honeycombs and can be mapped onto the following categories (a short code sketch combining two of them follows after Figure 43):

  • Usable: Quality of data. To keep an adapting system usable, you have to prevent the user from putting in garbage data. If the user is able to put in garbage, through his behavior or a wrong interpretation of the system, the system can only spill out garbage; this principle is called “garbage in, garbage out” (GIGO). To stay usable, adaptive systems have to take a close look at data analysis and interpretation over time, so the system does not take a wrong path from which the experience continuously gets worse.
  • Accessible: Transparency & revocability. Interfaces that make changes or deliver content without the user’s request have to be transparent about where the data for this change or delivery comes from, and the system’s decisions should be revocable by the user. This challenge goes hand in hand with the next one.
  • Findable: Reasonableness. Helping the user understand the conceptual model behind the decisions the system took is crucial for delivering a great experience. Together with transparency and revocability, it is highly recommended to make it reasonable to the user that the system has done something without his request.
  • Useful: Efficiency. Adaptivity has to help the user become more efficient than without the adaptive methods embedded in the user interface; e.g., supporting the user with automatically generated shortcuts for certain tasks provides a faster way to his goal.
  • Desirable: Delight. An interface with adaptive methods can delight a user by giving him, for example, information, content or support at the right time. If the information delivered delights the user, e.g., exceeds his expectations, the chances are high that the system becomes desirable over time.
  • Credible: Trust & Privacy. Providing trust in a system and keeping the user’s privacy concerns low is a huge challenge when it comes to adaptivity. Because most adaptive systems collect a wide variety of data to adapt to an individual user, low concerns about trust and privacy violations are crucial for the user experience when using adaptive methods in an interface.

Figure 43: Adaptive UX Honeycombs; inspired by the UX Honeycombs from Morville (2004) and Melzer (2005)
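
As a closing illustration, the following sketch combines two of the honeycomb challenges: a plausibility guard against “garbage in, garbage out” (usable) and an undo log for revocability (accessible). The outlier test and the data structures are assumptions for illustration only.

```typescript
// Sketch: a GIGO guard and an undo log for adaptive changes.
interface Observation {
  value: number;
  timestamp: number;
}

// Usable: reject implausible observations before they enter the user model.
function isPlausible(obs: Observation, history: Observation[]): boolean {
  if (history.length < 3) return true; // too little data to judge
  const mean = history.reduce((sum, o) => sum + o.value, 0) / history.length;
  return Math.abs(obs.value - mean) < 3 * Math.max(1, mean); // crude outlier test
}

// Accessible: every system-initiated change stays revocable.
class UndoLog {
  private stack: Array<() => void> = [];
  record(undo: () => void): void {
    this.stack.push(undo);
  }
  undoLast(): void {
    this.stack.pop()?.();
  }
}
```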

If all these challenges are solved right, there are good chances that the user will trust the system and have low concerns about the violation of his privacy. This is especially true if the quality of the data is valid, the behavior of the system is transparent, and the user is still in control at some level or, if not, the behavior or changes of the system are at least reasonable to him. This model therefore provides six answers to the question: “What are proper rules to manage adaptation?”