Abstract
In this paper we present a preliminary user study conducted on a walk-up-and-use musical instrument dubbed Collective Loops, specifically designed for co-located collaborative interaction for the general public. The aim of this study was to verify that displaying all users’ choices in a shared interface would promote and facilitate user engagement in creative collaboration. Although the results do not confirm our hypothesis, the experiment allowed us to detect a more general design issue with such walk-up-and-use multi-display installations: striking the right balance between the different interfaces in order to release some of the users’ attention for the benefit of the collaborative process.
1 Introduction
The field of interactive art seized very early on the new possibilities offered by ubiquitous computing, which enabled an in-depth renewal of forms of aesthetic experience and the creation of new face-to-face collective interactive situations. Such installations became more prominent with the democratization of mobile phones, through projects such as Blinkenlights [6] by the Chaos Computer Club, in which people could play Pong or send messages on a very large public display using their own mobile phones. Similarly, Dialtones (A Telesymphony) [10] by Golan Levin et al., TweetDreams [8] by Luke Dahl et al., and Kim Haeyoung’s Moori [13] allowed participants to interact with a musical performance via their phones.
While some of these installations enabled creative collaboration between co-located individuals, most of them relied on some form of human moderation or orchestration to guide the audience’s actions, control the system’s reactions, or rearrange the individual outputs to produce a suitable common result.
Furthermore, co-located collaboration has been a gradually emerging topic in HCI, and more specifically in the CSCW community, but research has mostly focused on systems designed to facilitate collaboration between teammates in a working environment [2, 11].
Our study, on the other hand, concentrates on the engagement in a collective and collaborative process, with little to no intervention for moderation or orchestration, around a creative interactive installation. One of the many challenges faced in designing such art installations, intended for co-located interaction by the general public without human moderation, involves the ability to redirect the user’s attention from individual interfaces towards shared interfaces in favor of a collective and collaborative outcome.
Some closely related research has dealt with non-expert user engagement. Brignull and Rogers have, for instance, studied how people can be enticed to interact with a shared public system that allows them to post views and opinions [4], and Edmonds et al. and Bilda et al. have addressed the problems faced by creative installations in terms of audience engagement [3, 9]. Other research has dealt with visual attention issues in multi-display environments [5, 12, 15]. Yet none seem to have specifically dealt with collective experiences in walk-up-and-use systems.
In this article, we present a comparative user study that was conducted on an artistic project entitled Collective Loops during a public event. Through this study, we expected to better understand the role of individual and shared interfaces in the shift of users’ attention, and to examine some of the conditions that help lead to a “successful” collective interaction by allowing the progressive coordination of actions in favor of a common satisfactory result.
2 Collective Loops
Collective Loops is a real-time collaborative musical 8-step loop sequencer developed as a prototype project for the CoSiMa (Collaborative Situated Media) platform [7] (see Note 1), in which several institutions and agencies are combining efforts to develop a software platform, based on Web standards, that eases the creation of co-located collective interaction projects.
Having been thoroughly described in another publication [16], the Collective Loops project and its setup will only be briefly presented here.
2.1 Concept and Technical Setup
The loop sequencer comprises two user interfaces: (1) an individual interface used by each participant through their smartphone, allowing them to alter the sound emitted from it, as illustrated in Fig. 1, and (2) a shared, floor-projected, circular visualization (approximately 3 m in diameter) showing all participants’ choices as well as the current position of the sequencer’s reading head (represented by a bright moving sector), as depicted in Fig. 2a.
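To make the coupling between the two interfaces concrete, the following minimal sketch (in JavaScript, the platform’s language) shows how the reading head’s current sector could be derived from a clock shared across devices; the step duration and the syncTime argument are illustrative assumptions, not the installation’s actual parameters.

```js
// Hypothetical sketch: deriving the reading head's sector from a shared clock.
const NUM_STEPS = 8;                       // the sequencer's 8 time slots
const STEP_DURATION = 0.5;                 // seconds per step (assumed tempo)
const LOOP_DURATION = NUM_STEPS * STEP_DURATION;

// `syncTime` stands for a clock (in seconds) synchronized across all devices.
function currentStep(syncTime) {
  return Math.floor((syncTime % LOOP_DURATION) / STEP_DURATION);
}

// The floor projection highlights the sector returned by currentStep(),
// while each phone plays its notes when currentStep() equals its time slot.
```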
The CoSiMa platform, and consequently Collective Loops, is developed using open Web technologies such as Node.js, WebGL, and the Web Audio API. The Collective Loops installation is composed of three main software components: (1) a local server managing inter-device synchronization and communication, (2) a mobile web application accessible via a local Wi-Fi hotspot handling the individual user interfaces, and (3) a web application running on a local machine connected to a video projector handling the floor-projected shared interface.
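As a rough illustration of component (1), the sketch below shows a minimal relay server built on Node.js and the ws package that rebroadcasts every client message to all connected clients (phones and the projection machine). It is only an assumption about how state could be kept consistent: the actual CoSiMa server also performs clock synchronization and slot assignment, and its message protocol is not reproduced here.

```js
// Minimal relay sketch (not the actual CoSiMa server): every message received
// from a client is forwarded to all clients so that the individual and shared
// interfaces stay consistent. Requires the `ws` npm package.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    for (const client of wss.clients) {
      if (client.readyState === 1 /* OPEN */) {
        client.send(data.toString());      // e.g. a JSON-encoded note selection
      }
    }
  });
});
```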
2.2 Iterative Design
A first version was deployed to the general public during the Ircam Open Days in June 2015, allowing up to 24 participants to collaboratively create simple melodies using predefined notes of 3 instruments: percussion, bass, and melody. The sequencer was divided into 8 time slots, with up to 3 participants per slot, one per instrument. A slot and an instrument were automatically assigned to each newcomer, who could then modify the note of the instrument by selecting one of 12 possible choices through the inclination of the smartphone.
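As an illustration of this tilt-based selection, a mapping of the kind used in the first version could be written with the standard DeviceOrientationEvent API as in the sketch below; the 0–90° range and the selectNote helper are hypothetical and only indicate the principle.

```js
// Hypothetical tilt-to-note mapping: DeviceOrientationEvent.beta reports the
// front-back tilt in degrees; we fold an assumed 0-90 degree range into one of
// the 12 possible note choices.
const NUM_NOTES = 12;

window.addEventListener('deviceorientation', (event) => {
  const beta = Math.min(90, Math.max(0, event.beta || 0));   // clamp the tilt
  const noteIndex = Math.min(NUM_NOTES - 1, Math.floor((beta / 90) * NUM_NOTES));
  selectNote(noteIndex);   // hypothetical helper updating the participant's note
});
```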
This first public deployment allowed us to perform a full-scale test of Collective Loops and detect technical issues that were difficult to evaluate otherwise. It also allowed us to get feedback on interface and interaction design aspects through a qualitative study with interviews, questionnaires, and video recordings.
It was, for example, observed that most of the participants had a hard time figuring out that their choice of note was made by tilting the smartphone back and forth. In addition, many participants felt that their scope of action was very limited, which had a negative impact on their motivation to interact.
These imperfections prompted us to improve the project. A second version was thus developed in which a more consistent design was chosen for the individual and shared interfaces, and touch input was favored over gestures for note selection to better support a walk-up-and-use experience.
This new version supports up to 8 participants, one per time slot. A first touch interface on smartphones allows each newcomer to manually choose a time slot from all available slots. A second touch interface can then be used to alter the sound emitted from the smartphone by choosing notes from all 3 instruments. Unlike the first version, multiple notes from each instrument can be activated by the same participant within certain predefined limits (3/12 melody notes, 1/6 bass notes, and 3/3 percussion notes).
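These per-instrument limits could be enforced on the client with a small amount of local state, as in the following sketch; the data structures and function names are assumptions made for illustration rather than the project’s actual code.

```js
// Sketch of the per-instrument selection limits
// (3/12 melody notes, 1/6 bass notes, 3/3 percussion notes).
const LIMITS = { melody: 3, bass: 1, percussion: 3 };
const selected = { melody: new Set(), bass: new Set(), percussion: new Set() };

function toggleNote(instrument, noteIndex) {
  const notes = selected[instrument];
  if (notes.has(noteIndex)) {
    notes.delete(noteIndex);               // deselect an active note
  } else if (notes.size < LIMITS[instrument]) {
    notes.add(noteIndex);                  // select it, if under the limit
  }
  return notes.has(noteIndex);             // new state, to be reported to the server
}
```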
Moreover, the emitted sound intensity can be controlled by tilting the smartphone back and forth, and an added echo effect enriches the experience and keeps it interesting even with a small number of participants.
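Both features rely on standard Web APIs. The sketch below shows one plausible wiring with the Web Audio API: a gain node driven by the phone’s front-back tilt for the intensity, and a feedback delay loop for the echo. All parameter values are illustrative assumptions.

```js
// Illustrative Web Audio graph: tilt-controlled intensity and a feedback delay.
const ctx = new AudioContext();
const master = ctx.createGain();           // intensity, driven by device tilt
const delay = ctx.createDelay(1.0);        // echo line (max 1 s)
const feedback = ctx.createGain();         // echo decay

delay.delayTime.value = 0.375;             // assumed echo time in seconds
feedback.gain.value = 0.4;                 // assumed echo decay factor

// Dry path plus echo loop: master -> destination, master -> delay -> feedback -> delay.
// The synthesized notes would be connected into `master` (not shown).
master.connect(ctx.destination);
master.connect(delay);
delay.connect(feedback);
feedback.connect(delay);
delay.connect(ctx.destination);

window.addEventListener('deviceorientation', (event) => {
  const beta = Math.min(90, Math.max(0, event.beta || 0));
  master.gain.value = beta / 90;           // map front-back tilt to a 0-1 gain
});
```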
3 User Study
Previous studies have shown that revealing teammates’ progress to each other can improve collaboration in distributed problem solving situations [1, 14]. Yet, to the best of our knowledge, no studies seem to have been conducted to understand the role of mutual awareness, through shared visualization, in collaboration in a leisure, walk-up-and-use co-located setting. An experiment was thus designed in order to test the hypothesis that revealing all users’ choices on the common, shared interface would promote a more collaborative experience.
Since the quality of creative collaboration that is not guided by a predefined outcome is subjective, we based the study on participants’ impressions concerning their engagement in the collective process and their ability to create a common satisfactory result. We expected participants to build a better sense of collaboration and engagement when able to view each other’s choices, thus enhancing the collective experience.
Two different designs for the shared, floor-projected, interface (shown in Fig. 2) were tested in this comparative study: (A) participants’ choices are displayed on the interface as well as the reading head’s current position, (B) only the reading head’s current position is displayed. We will refer, in this paper, to the first interface as “NSloop” (for Note Selections) and the second interface as “RHloop” (for Reading Head).
3.1 Context and Setup
The improved version of Collective Loops was deployed for the audience of the workshops and presentations held during the International IRCAM Forum Workshop in November 2016. We opted to perform the study in the context of this public showing, based on the notion that “the site of exhibition can be seen (...) [as] the central site for interactive art research – the necessary starting and finishing point for any study that aims to understand how meaning is produced by an interactive artwork” [9]. The high costs involved in the deployment of such installations were also a deciding factor. Indeed, the setup of installations such as Collective Loops requires a non-negligible amount of time and manpower, and the floor projection requires the mounting of a complex video-projection system.
Furthermore, although the CoSiMa platform aims to be compatible with most smartphones, the current state of web standards’ implementations still requires lending compatible devices to participants. Thus, to facilitate the participation and improve the user experience, users were encouraged to borrow a preconfigured device with a small portable speaker attached to it as shown in Fig. 1c.
3.2 Subjects and Procedure
The audience of the workshops was invited to freely experience the installation during lunch and coffee breaks, which lasted between 1 and 2 h; interfaces were alternated outside of those break times. Human mediation was limited to lending a preconfigured device or assisting during the connection process on a participant’s own smartphone. Participants thus had no prior knowledge of the system and its musical collaborative possibilities.
After their participation, users were asked to volunteer for the study by filling out a questionnaire. Since none of the participants experienced both interfaces, 36 questionnaires were collected (18 per interface). The subjects who experienced NSloop comprised 5 women and 13 men varying in age from 19 to 68 years old (mean: 38.9, \(\sigma \): 13.1), whereas those who experienced RHloop comprised 5 women and 13 men varying in age from 19 to 69 years old (mean: 36.2, \(\sigma \): 14).
3.3 Data Collection
The questionnaires consisted of two general profile questions (age and gender), ten 5-point Likert scale questions eliciting the participant’s impressions on the individual and collective experience, and a final open-ended question allowing the expression of general feelings and suggestions. We also video-recorded all sessions from above, and stored detailed system communication logs.
3.4 Results and Analysis
The diagrams in Fig. 3 visualize the subjects’ self-reported impressions of their individual and collective engagement in the real-time musical composition with both interfaces. Although many of the comments left in the open-ended question suggest that the participants who experienced NSloop were more satisfied with the global experience than those who experienced RHloop, the responses to the Likert scale questions do not show a significant difference between the two interfaces in either the individual or the collective experience (a two-tailed Mann-Whitney test was performed for each question with n1 = n2 = 18, \(P<0.05\), and the U-value and Z-score as reported in the caption of each diagram in Fig. 3).
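For reference, the reported statistic can be obtained as in the following simplified sketch of the Mann-Whitney test (normal approximation, without the tie correction that Likert-scale data would normally call for); it is shown only to make the U-value and Z-score concrete and is not the analysis script used in the study.

```js
// Simplified Mann-Whitney U with normal approximation (no tie correction).
function mannWhitneyU(a, b) {
  const n1 = a.length, n2 = b.length;
  // Pool both samples, remember group membership, and sort by value.
  const pooled = [...a.map(v => ({ v, g: 0 })), ...b.map(v => ({ v, g: 1 }))]
    .sort((x, y) => x.v - y.v);
  // Assign ranks, averaging the ranks of tied values.
  const ranks = new Array(pooled.length);
  for (let i = 0; i < pooled.length; ) {
    let j = i;
    while (j < pooled.length && pooled[j].v === pooled[i].v) j++;
    const avgRank = (i + j + 1) / 2;       // 1-based average rank of the tie group
    for (let k = i; k < j; k++) ranks[k] = avgRank;
    i = j;
  }
  // Rank sum of the first sample, then U and the Z-score.
  const r1 = pooled.reduce((sum, e, i) => sum + (e.g === 0 ? ranks[i] : 0), 0);
  const u1 = r1 - (n1 * (n1 + 1)) / 2;
  const u = Math.min(u1, n1 * n2 - u1);
  const z = (u - (n1 * n2) / 2) / Math.sqrt((n1 * n2 * (n1 + n2 + 1)) / 12);
  return { u, z };
}
```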
We can further notice that the peaks in those diagrams shift from right to left as we go from the individual experience questions (top-left) to the collective experience ones (bottom-right), implying that participants were globally satisfied with the individual experience but did not manage to fully exploit the collective and collaborative aspects of the installation.
Indeed, the charts reveal that most participants deemed that they were able to follow their own actions, but were not fully aware of others’ actions and did not pay attention to others’ choices very often. They also indicated that they rarely communicated with their peers and did not quite feel engaged in a collaborative process.
Data collected from system logs, illustrated in the diagrams of Fig. 4, reveal that the average number of simultaneous participants was slightly lower on NSloop (2.8) than on RHloop (3.1). Against our expectations, the total time spent per participant on the installation was on average shorter on NSloop (4’27”) than on RHloop (7’05”). This might have been a consequence of the lack of shared information on RHloop, forcing participants to spend more time apprehending the system. No significant difference was, however, observed in the number of note selections made by participants per minute between NSloop (mean: 25; \(\sigma \): 17) and RHloop (mean: 22; \(\sigma \): 17) when compared using a t-test (t-value: 0.65; two-tailed p-value: 0.52).
Furthermore, 7 participants indicated that the acoustic aspect of the system was not engaging enough and would have appreciated having more choices and control over the sounds emitted. One also noted that the shared projection on NSloop was too prominent, dissuading discussion and collaboration with others, a feeling reinforced by the fact that the sound emanating from others’ devices was not always audible. Another suggested that the portability of the individual devices was not fully taken advantage of due to the fixed position of the floor projection.
Based on the participants’ subjective evaluation of collaboration, the study did not succeed in demonstrating a statistically significant improvement in the shift of focus from the individual to the collective when all users’ choices are exposed on the shared interface. However, it allowed us to get a better understanding of the multiple design challenges that need to be addressed in walk-up-and-use installations of this type. Some of these challenges involve the difficulties encountered by the first participants in understanding their field of action, and the need to make the experience rich enough to encourage longer participation, allowing the formation of larger groups.
Additional research will be required to verify the results of this study and better understand the role of individual and shared graphical user interfaces in promoting collective experiences around multi-display co-located interactive environments.
4 Conclusion and Future Work
We presented a comparative user study undertaken on a collective musical installation to verify a hypothesis regarding the role of a shared interface in promoting user engagement in a collaborative process when interacting with others around a co-located artistic installation.
The described installation implements a multi-display interactive system with smartphones and a shared floor-projection. Users act on their individual device, while the shared projection allows them to visualize the full set of all users’ choices. The system ultimately aims to support the intuitive development of collaboration between participants.
Users are confronted with four sources of attention throughout their participation: (1) the individual interaction on their smartphones, (2) the visualization of all the interactions on the shared projection, (3) the sounds emitted from all the individual devices, and (4) their collaboration with other members of the public. While we wish to promote the latter by means of the individual and shared interfaces, all four seem to compete with each other: they all solicit, in different ways, the user’s attention. Hence, it seems difficult for participants to pay attention to others if their minds are already occupied by one, let alone two or even three, other sources of attention.
Therefore, more research needs to be undertaken to find a balance between these concurrent sources of attention. We could, for example, consider the attentional weight of the individual and shared interfaces in order to balance them, or dynamically manage their ability to retain or release users’ attention, and thus better control which type of interaction we wish to privilege in order to support the emergence of a collaborative situation.
Such research could help model design patterns for co-located interactive installations that encourage the audience to engage in a creative and collaborative process by building on previous frameworks such as the work of Edmonds et al. in which they describe a model of creative engagement with three attributes: attractors, sustainers, and relaters [9].
Notes
1. The CoSiMa project is funded by ANR, the French National Research Agency (ANR-13-CORD-0010).
References
Balakrishnan, A.D., Fussell, S.R., Kiesler, S.: Do visualizations improve synchronous remote collaboration? In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1227–1236. ACM (2008)
Bardram, J.E., Esbensen, M., Tabard, A.: Activity-based collaboration for interactive spaces. In: Anslow, C., Campos, P., Jorge, J. (eds.) Collaboration Meets Interactive Spaces, pp. 233–257. Springer, Cham (2016). doi:10.1007/978-3-319-45853-3_11
Bilda, Z., Edmonds, E., Candy, L.: Designing for creative engagement. Des. Stud. 29(6), 525–540 (2008). Interaction Design and Creative Practice
Brignull, H., Rogers, Y.: Enticing people to interact with large public displays in public spaces. In: Human-Computer Interaction INTERACT 2003: IFIP TC13 International Conference on Human-Computer Interaction, 1st–5th September 2003. IOS Press (2003)
Cauchard, J.R., Löchtefeld, M., Irani, P., Schöning, J., Krüger, A., Fraser, M., Subramanian, S.: Visual separation in mobile multi-display environments. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 451–460. ACM (2011)
Blinkenlights. http://blinkenlights.net/project
CoSiMa - Collaborative Situated Media. http://cosima.ircam.fr/
Dahl, L., Herrera, J., Wilkerson, C.: TweetDreams: making music with the audience and the world using real-time Twitter data. In: Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 272–275. Citeseer (2011)
Edmonds, E., Muller, L., Connell, M.: On creative engagement. Vis. Commun. 5(3), 307–322 (2006)
Dialtones (A Telesymphony). http://www.flong.com/projects/telesymphony/
Isenberg, P., Fisher, D., Morris, M.R., Inkpen, K., Czerwinski, M.: An exploratory study of co-located collaborative visual analytics around a tabletop display. In: Proceedings of Visual Analytics Science and Technology (VAST), pp. 179–186. IEEE Computer Society, Los Alamitos, November 2010
Kern, D., Marshall, P., Schmidt, A.: Gazemarks: gaze-based visual placeholders to ease attention switching. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 2010, pp. 2093–2102. ACM, New York (2010)
Kim, H.: Moori: interactive audience participatory audio-visual performance. In: Proceedings of the 8th ACM Conference on Creativity and Cognition, pp. 437–438. ACM (2011)
Paul, S.A., Morris, M.R.: CoSense: enhancing sensemaking for collaborative web search. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1771–1780. ACM (2009)
Rashid, U., Nacenta, M.A., Quigley, A.: Factors influencing visual attention switch in multi-display user interfaces: a survey. In: Proceedings of the 2012 International Symposium on Pervasive Displays, p. 1. ACM (2012)
Schnell, N., Matuszewski, B., Lambert, J.P., Robaszkiewicz, S., Mubarak, O., Cunin, D., Bianchini, S., Boissarie, X., Cieslik, G.: Collective loops: multimodal interactions through co-located mobile devices and synchronized audiovisual rendering based on web standards. In: Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI 2017, pp. 217–224. ACM, New York (2017)