Programmable Interactions

Interactions are fundamental to operating within the world. They come in many forms, taking place everywhere and on various levels. At the very least, to interact with the world an entity needs to be able to detect events, then interpret and act upon them. Together, these give meaning to the interaction. An interaction can be considered a process that consists of sensation (input from the world), anticipation (what events are expected from the world), adaptation (how to react to unforeseen events), and action (output to the world). Although these four phases originally describe humans [Neisser76], similar patterns can be found in how computers interact with people and with other computers. Today people live in a digital world, and interactions increasingly take place with computing. Yet the interactions between computers are of an entirely different nature than the interactions between humans, and using the technology still requires a lot of intervention from humans. Consequently, interactions within computing take place on levels that remain separate from the way people are accustomed to interacting.
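
To make the four phases a little more concrete, the following minimal Python sketch models them as a simple loop. The class and the world interface (poll_events, respond_to) are purely hypothetical and do not come from the thesis; they only illustrate the structure described above.

```python
# Illustrative sketch only: the names below are hypothetical and merely mirror
# the four phases of an interaction (sensation, anticipation, adaptation, action).

class InteractionLoop:
    def __init__(self):
        self.expected = set()          # anticipation: event types we expect

    def sense(self, world):
        """Sensation: read input (events) from the world."""
        return world.poll_events()     # `world` is a hypothetical interface

    def adapt(self, event):
        """Adaptation: handle an unforeseen event, e.g. by learning to expect it."""
        self.expected.add(type(event))

    def act(self, world, event):
        """Action: produce output back to the world."""
        world.respond_to(event)

    def step(self, world):
        for event in self.sense(world):
            if type(event) not in self.expected:
                self.adapt(event)
            self.act(world, event)
```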

Programmable interactions approach computing from a different perspective: the idea is that the interactions themselves should play a key role in computing and follow a new set of principles. From the software development perspective, the focus is then on implementing enablers that support interactions between co-located machines, humans, and the world, as opposed to developing traditional software applications, where a sole user must pick up a device, open an application, and then actively start interacting with it by giving input.

Figure 1. Programmable interactions put into context.

Motivation

The importance of interactions has been recognized since the beginning of the information age. For instance, J.C.R. Licklider presented his vision of Man-Computer Symbiosis in the 1960s and analyzed some of the problems of interaction between humans and computers [Licklider60]. A few years later, Licklider introduced his idea of an Intergalactic Computer Network, which eventually led to the development of the Internet [Roberts86] and now enables billions of people and entities to communicate. In 1991, Mark Weiser presented his well-known vision of Ubiquitous Computing, which described seamless interactions between humans and their surroundings [Weiser91]. According to Weiser, when things disappear into the background so that they can be used without thinking, people are freed to focus on new goals.

In line with these early visions, much of today's discussion revolves around topics such as the Internet of Things [Atzori10], fostered by new and heterogeneous networking technologies such as 5G networks [Gohil13]. These concrete examples show how the paradigm of computing is changing at this very moment. The world is quickly becoming a place full of interconnected, computer-enabled objects [Gartner13, Taivalsaari14]. Already now, people can use mobile apps to remotely control entities such as smart home electronics operating within separate networks. Although such interactions can be useful when one has a very specific task to complete, they force human users to act as servants to the machines. With ever more interconnected and computer-enabled objects, the situation should be the other way round: humans should be at the center of these interactions, but not in the role of operators. The entities should be able to interact with each other and, through their joint behavior, serve the humans present with them in the same space. Current computing infrastructures, however, do not support or encourage implementing such interactions, which leaves much room for improvement.

Looking at today's app stores for mobile devices, there seems to be an app for nearly every purpose. For instance, Apple's App Store currently contains 1.5 million apps and Google Play 1.6 million [Statista15]. Despite these vast numbers, the apps only employ a single device at a time, and little, if any, attention is paid to human-to-human or entity-to-entity interactions. In particular, interplay among multiple entities and humans present in the same physical space is sorely lacking.

In recent years, the concept of ecosystems has been proposed for improving the interoperability of devices operating within the same hardware/software environment [Bosch10]. However, these ecosystems do not enable actual interactions between and among the devices in the sense that the devices would be playing or co-operating with the users. Instead, the ecosystems are typically targeted at single-user, multiple-device scenarios and can lock the user into a single-vendor “silo” [Taivalsaari14]. For application developers, ecosystem support is also very limited; what exists typically amounts to synchronizing data over cloud services. Hardware platform vendors also tend to protect their ecosystem businesses, and may even impose platform restrictions that prevent apps from operating in certain ways. This does not promote communication and free interaction with other entities.

Web technologies, on the other hand, have long been based on open standards and hence offer tools for implementing applications in more vendor-neutral ways, in contrast to native apps, which cannot be run on other platforms. However, even though communication is built into the Web, the interactions still happen in the same way as with native apps. Also, the Web browser is essentially an app itself and only offers a sandbox for interactions. For these reasons, Web apps suffer from the same limitations as native apps, and even additional ones. Despite these limitations, Web technologies can still have advantages over native apps [Taivalsaari11] and can be used for enabling some interactions. Moreover, Web technologies can teach a lot about how software should work, and about standardization as a way of enabling vendor-neutral interactions in the future.

All this raises the question of whether the current ways of implementing software are feasible for enabling interactions within the modern computing environment. The idea is to approach the interactions from a programming model perspective. The programming model is designed to make the most of today's computing environment by utilizing resources from a diverse set of computer-enabled devices and heterogeneous networking technologies. Many devices are now capable of sensations and actions that allow them to interact with humans, their environment, and each other; some of these capabilities even go beyond human abilities. The devices, however, need to be programmed to anticipate events and then react to them. The ability to adapt, on the other hand, can improve over time by observing the users and by learning.

Programmable interactions are based on four fundamental principles, according to which the interplay between humans and machines should be social, personalized, proactive, and predictable. Figure 1 above depicts some example scenarios of programmable interactions in different contexts. These novel interactions are built with a model named Action-Oriented Programming, which is especially targeted at implementing interactions between computing machines that are co-located in the same space with humans. Social aspects are therefore an important part of these interactions. The computing machines respect the social relationships between humans while they are interacting, and they may also appear in human-like ways to the co-located people by using the modalities that humans are accustomed to using with each other. A concrete example could be a car navigator and a mobile phone using the voice modality to tell the user that the two are negotiating about the destination, as in the sketch below.
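
The following Python sketch hints at what such a social, voice-mediated negotiation might look like to a developer. The classes and functions are hypothetical illustrations and are not the actual Action-Oriented Programming interface.

```python
# Hypothetical sketch of a "social" programmable interaction; none of these
# names come from the actual Action-Oriented Programming model.

class Device:
    def __init__(self, name):
        self.name = name

    def say(self, text):
        # In a real system this would use a text-to-speech voice modality.
        print(f"{self.name}: {text}")


def negotiate_destination(navigator: Device, phone: Device, calendar_destination: str):
    """Car navigator and phone agree on a destination and tell the user about it
    using the voice modality, so the negotiation stays understandable to humans."""
    navigator.say("Checking today's calendar with your phone for the destination.")
    phone.say(f"The next appointment is at {calendar_destination}.")
    navigator.say(f"Setting the route to {calendar_destination}. Say 'stop' to override.")
    return calendar_destination


if __name__ == "__main__":
    negotiate_destination(Device("Navigator"), Device("Phone"), "Main Street 12")
```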

To make the interactions personalized, information available in cyberspace can be utilized in the physical space. Digital content is now an essential part of human life, and thus user-generated digital content is important for making the interactions more personal. Such interactions then have a lot of potential for enriching social behavior and aiding people in their everyday activities. Concrete examples of these types of programmable interactions include sharing life events and activities much as people now share them on social media, except that the sharing takes place in face-to-face encounters. The devices may, for instance, help people find others with similar interests at a conference, automatically exchange contact information, and even help break the ice in some situations, as sketched below. Typical use cases also include social games.
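
As a rough illustration of the conference scenario, the sketch below compares two co-located attendees' interest profiles and proposes a contact exchange when they overlap. The data shapes and thresholds are assumptions made up for this example.

```python
# Hypothetical sketch: two co-located attendees' devices compare interest profiles
# and, if they overlap enough, propose exchanging contact details.

def shared_interests(interests_a: set, interests_b: set) -> set:
    return interests_a & interests_b


def maybe_exchange_contacts(profile_a: dict, profile_b: dict, min_overlap: int = 2):
    common = shared_interests(profile_a["interests"], profile_b["interests"])
    if len(common) >= min_overlap:
        # A real system would require explicit consent from both users first.
        return {
            "topics": sorted(common),
            "contacts": (profile_a["contact"], profile_b["contact"]),
        }
    return None


if __name__ == "__main__":
    alice = {"contact": "alice@example.org", "interests": {"IoT", "Web", "robotics"}}
    bob = {"contact": "bob@example.org", "interests": {"IoT", "Web", "5G"}}
    print(maybe_exchange_contacts(alice, bob))
```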

In many ways, programmable interactions go beyond the current concept of an app: the boundaries have been removed, they are not tied to any specific platform or device, and they are not necessarily used actively by the user. Instead, programmable interactions proactively take place between co-located humans and machines. Hence, these interactions can be considered a kind of ambient intelligence [Rogers06], since they can be used for changing the state of the physical world based on humans' preferences and ongoing activities. A concrete example, sketched below, could be adjusting the atmosphere of the physical environment in which a certain activity, such as a social game, is taking place.
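
The following sketch shows one way such a proactive rule could be expressed; the activity presets and the Lights interface are illustrative assumptions, not part of the thesis.

```python
# Hypothetical sketch: a proactive rule that adjusts the physical environment
# (here, lighting) when a joint activity is detected among co-located people.

ACTIVITY_PRESETS = {
    "social_game": {"brightness": 0.4, "color": "warm"},
    "meeting": {"brightness": 0.9, "color": "neutral"},
}


class Lights:
    """Stand-in for a real smart-lighting API."""
    def set_brightness(self, level: float) -> None:
        print(f"brightness -> {level}")

    def set_color(self, color: str) -> None:
        print(f"color -> {color}")


def adjust_environment(activity: str, lights: Lights) -> None:
    preset = ACTIVITY_PRESETS.get(activity)
    if preset is None:
        return  # unknown activity: change nothing, stay predictable
    lights.set_brightness(preset["brightness"])
    lights.set_color(preset["color"])


if __name__ == "__main__":
    adjust_environment("social_game", Lights())
```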

Since people are not accustomed to this novel approach, in which interactions with technology take place proactively, it will naturally take some time to adapt to the new principles. For the same reason, it is important to pay special attention to the predictability of the interactions and to ensuring that users can trust the system. Moreover, it is important to make users feel that they remain in control of the technology and to let them adjust the level of proactivity, as in the sketch below.
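
One simple way to keep the user in control is a proactivity setting that gates every proactive action. The levels and function below are hypothetical and only illustrate the idea.

```python
# Hypothetical sketch: a user-adjustable proactivity level that decides whether a
# proactive interaction runs automatically, asks for confirmation, or stays off.

from enum import Enum


class Proactivity(Enum):
    OFF = 0        # never act without an explicit request
    SUGGEST = 1    # propose the action and wait for the user's confirmation
    AUTOMATIC = 2  # act on the user's behalf, but keep the behaviour visible


def run_interaction(level, action, confirm):
    """Run `action` according to the chosen proactivity level.

    `action` performs the interaction; `confirm` asks the user and returns a bool.
    Returns True if the action was carried out.
    """
    if level is Proactivity.OFF:
        return False
    if level is Proactivity.SUGGEST and not confirm():
        return False
    action()
    return True


if __name__ == "__main__":
    run_interaction(Proactivity.SUGGEST,
                    action=lambda: print("Adjusting the lights."),
                    confirm=lambda: input("Adjust the lights? [y/n] ") == "y")
```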

READ MORE FROM MY PHD THESIS
