2) Historical background of DAS

This chapter outlines the journey of DAS from scrappy experiments to a lean digital layer that turns everyday actions into analyzable signals. DAS emerged from experimental research using smartphones to collect data from students in a Master’s programme in entrepreneurship at Chalmers University of Technology in Sweden. You’ll see why lightweight IT mattered: not to add bureaucracy, but to standardize prompts, tags and emotion scores, and to enable fast feedback. We also connect DAS to its historical roots in the theory of science (i.e. Design Science Research, Clinical Action Research, Experience Sampling and Critical Realism) to see how they help explain why DAS works the way it does. Key lessons: start tiny, reuse content packages, and build communities that work together in iterative experiments. History here is practical: constraints shaped the DAS craft, and the historical roots help explain the resulting DAS practice.

It all started at a Master’s programme in entrepreneurship at Chalmers. It was an extreme case in terms of action-orientation, impact and emotionality. Students in teams of three were asked to create a real-world technology venture together with idea partners, often a research group at a technical university. This frequently triggered an emotional roller-coaster of identity-changing transformative learning, which in turn led to strong development of the students’ entrepreneurial competencies. I already knew about these powerful effects, since I had graduated from the programme back in 2001. It changed my life trajectory completely. The teachers turned me into an entrepreneur: a co-founder in the transport industry, continuing to run our project after graduation as a much too inexperienced CEO. After a decade working at what became a fast-growing and successful IT company, I nevertheless got tired of trucks and returned to do a PhD with the de facto research question: “What the #*&% did those teachers do to me?”.

When it was time to collect data, I used some tricks I had learned in the IT industry. I wanted to increase the quality of the data and save some time. Thirteen of our students agreed to use a social media-like app I built for them. They used it for nine months to write a reflection whenever they had a strong emotional experience related to our entrepreneurship programme. Each time, they answered two simple questions: “How do you feel?” and “Why do you feel like this?”. We could then have a digital conversation around specific events in their learning journey, much like the comment thread on regular social media platforms. These app reports were also fed into longitudinal interviews which were transcribed. The resulting mixed dataset consisting of both text and numbers became remarkably fine-grained and rich. 

Findings unexpectedly showed that it was not the creation of a venture that triggered the strongest learning, but rather the altruistic acts of creating value for others (Lackéus, 2014, 2020b). An innovative research method had revealed a largely unknown and long neglected causal mechanism in student learning (cf. Freinet, 1948). This led to the articulation of a novel pedagogical approach we called Value Creation Pedagogy: let students learn through creating value for others (Lackéus et al., 2016). Equipped with new methodological “eyes” to see with, we indeed saw new things of value for many others (cf. Bansal et al., 2018).

Iterative improvements of DAS

After a few years of method experimentation involving a growing number of people, we also added a design step. A group of vocational teachers in the beautiful city of Uddevalla put us on this track, since they found the app very useful. They requested an action feature to better guide their upper secondary school students’ learning journeys when on apprenticeship placements. The easy part was building the IT function into the app. The difficult part was everything non-IT: how do you design good action-reflection tasks? We tried out many different sets of 5-15 micro-actions that students could be asked to carry out during their practicums and then reflect upon. Some actions worked better than others, and there was indeed a method to it. I also experimented for a decade with a couple of action-reflection sets for my own students, see Lackéus (2024).

The design step further strengthened the DAS method, since the resulting data became more cause-effect oriented. It also reduced the need to conduct interviews. We got the sought-after insights directly from the written reflections, since they became deeper and more specific. Which actions led to which learnings according to the students? Which actions were popular, difficult, uninteresting, emotional, failure-prone, etc., and why? In each class, there was a sufficiently large group of students who eloquently could put words on their learning journey.

We also invited the students to the analysis afterwards, in the spirit of Wilhelm von Humboldt, the father of the modern university (cf. Shumar and Robinson, 2018). He famously proposed that teachers and students work collaboratively in the quest for new knowledge. We applied generative AI to produce the analysis reports used in the co-analysis together with students, asking them to meta-reflect: “How did I become more entrepreneurial?” or “What role did emotions play for my learning?”. AI has worked well for this purpose, and has made DAS much easier to use, especially for practitioners without a PhD degree or scientific data analysis experience. Fifteen years after the emotion-focused start, we now have a mature research methodology labeled DAS and a new kind of digital research instrument – an app called Loopme, see Loopme.se. This app defines a new category of digital tools we chose to label Scientific Social Media (SSM) tools (Lackéus, 2020a). Both the method and the app instrument are widely used around Sweden, Europe and globally (e.g. Tjulin and Klockmo, 2023; Hmama and Rih, 2025; Derre, 2023; Larsen and Neergaard, 2024; Morland and Lever, 2024). We have also published a journal article and a book about the method (Lackéus, 2021; Lackéus and Sävetun, 2025), and made a training programme available for new DAS users (Lackéus, 2025).

Four research traditions that help explain how DAS works

Designed action sampling (DAS) is an amalgam and innovative development of four different research traditions. First and foremost, the focus has been on taking action together with practitioners, aiming to help them in their daily efforts to create value for their beneficiaries such as students, colleagues or customers. This is grounded in a Clinical Action Research (CAR) tradition (Coghlan, 2009; Schein, 1993). Second, such help has been given through a careful co-design and written specification of practical activities or “tasks” that are then tested in social experiments by participants such as teachers, students and colleagues. This is grounded in a Design Science Research (DSR) tradition (Dresch et al., 2015; Romme, 2003). Third, data collection in DAS is done through asking all participants to quantify and reflect in writing upon any effects they saw (or not), immediately after they carried out an action task. This is grounded in an Experience Sampling Method (ESM) tradition (Hektner et al., 2007; Stone et al., 2003). Fourth, the overarching ambition of DAS is to study causal mechanisms on a micro-level in a context-sensitive way, aiming to uncover “what might work, for whom and why” (Brentnall et al., 2018, p.406) rather than to arrive at macro-level universal laws for what “works” for all. This is grounded in a Critical Realism (CR) tradition (Little, 1991; Sayer, 2010). I will now briefly present these four research traditions.

Clinical Action Research (CAR)

Action research is a broad family of research approaches where researchers enter a setting with the dual purpose of helping practitioners with their real-world problems while at the same time collecting data that may contribute to furthering social science (Coghlan & Shani, 2014). Action research rests on the basic assumption that only by trying to change a social system can one fully understand it (Lewin, 1947). Hidden mechanisms become visible that would not have revealed themselves for a mere observer at a distance. Action research is often undertaken in action-and-reflection cycles, comprising plan-act-observe-reflect-revise, allowing for theory and practice to inform each other (Altrichter et al., 2002). This, however, entails many challenges, such as how to generalize beyond the immediate setting, how to publish generated insights in scientific outlets, and how to avoid vague and fuzzy working procedures (Wedekind, 1997). Therefore, an action research tradition may well benefit from being combined with other research traditions.

One such tradition is adopting a clinical posture. Medical doctors are not alone in doing clinical work. Also social workers, lawyers, consultants, coaches, managers and certainly teachers engage in clinical work, if we by clinical mean a situation where someone is being asked to help another human being – a client – in a relational way (Schein, 1993). What differentiates clinical action research from regular action research is that “the researcher comes into the situation in response to the needs of the client, not his or her own needs to gather data” (Schein, 1993, p.703). This changes the psychological contract significantly for people when asked to participate. A regular research-oriented purpose can in fact deter participants from wanting to take part at all, dismissing outsiders’ attempts to help them develop their practice as unsolicited advice (Farber, 1999, p.226).

Design Science Research (DSR)

DSR is a new and fast-growing research tradition with roots in work by Nobel laureate Herbert Simon (Dresch et al., 2015). In his famous book The sciences of the artificial, Simon (1969) argues that a different kind of science – a design science – is needed in professions where emphasis is on creation of artificial objects and situations, that is, artifacts created by humans rather than naturefacts existing in nature (Hilpinen, 2011). When teachers, engineers, architects and other professionals create things and situations that did not previously exist, they are in fact conducting design work. Simon contrasts design of what ought to be with natural science’s describing and analysing of what is. DSR thus represents a more prescriptive and problem-solving research approach of “changing existing situations into desired ones” (Romme, 2003, p.562). Compared to the action research approach, DSR adds a focus on novel artifacts that researchers co-create together with practitioners in attempts to solve their practical problems (Holmström et al., 2009). Such artifacts can help bridge a problematic theory-practice gap (Van de Ven & Johnson, 2006).

In DSR, it is mainly the so-called design principles that help bridge between theory and practice. They do this by offering a tentative written-down prescriptive template for how to solve a practical problem of a certain type (Denyer et al., 2008). A design principle is often articulated according to a CIMO-logic, describing Context, Intervention, Mechanism and Outcome. The aim is to specify “what to do, in which situations, to produce what effect and offer some understanding of why this happens” (Denyer et al., 2008, p.396). CIMOs can be developed from literature, from practice, or as a combination. They always need to be validated through field tests in practice. Since design work is often done in a complex organisational setting, it usually results in multiple closely related design principles (Romme & Endenburg, 2006). A collection of carefully developed and tested design principles represents an immaterial artifact that practitioners can use and share more broadly (Dimov et al., 2022). We end up with a reification – something abstract having been made more concrete, in this case, a piece of codified, transferable and pragmatically validated actionable knowledge about “what might work, for whom and why” (cf. Brentnall et al., 2018, p.406).
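The CIMO-logic above can be sketched as a small record in code. This is only an illustration of the template’s four components; the field names and the example principle below are invented, not taken from any published collection of DAS design principles.

```python
from dataclasses import dataclass


@dataclass
class DesignPrinciple:
    """One CIMO-structured design principle (illustrative field names)."""
    context: str       # C: in which situations the principle applies
    intervention: str  # I: what to do
    mechanism: str     # M: some understanding of why it works
    outcome: str       # O: what effect it is expected to produce


# A hypothetical, invented principle from a teaching setting:
principle = DesignPrinciple(
    context="Upper secondary students on apprenticeship placements",
    intervention="Ask students to complete one small value-creating task per week",
    mechanism="Creating value for others triggers emotional engagement",
    outcome="Deeper written reflections and stronger entrepreneurial competencies",
)
```

A collection of such records, each pragmatically validated through field tests, would correspond to the immaterial artifact described above.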

Experience Sampling Methodology (ESM)

ESM is a method to collect an immediate and momentary detailed portrait – a sample – of people’s thoughts and feelings as they take action in their natural environment. Participants are asked to provide written self-reports by completing a very short questionnaire multiple times, typically over a week or two (Larson & Csikszentmihalyi, 1983). An ESM questionnaire can consist of questions such as “What are you doing right now?”, “How do you feel about it?” and “Why do you feel like this?”. Some questions are scale-based and others are open text-based, resulting in a mixed dataset consisting of both text and numbers. Questionnaires are completed either at random times through help from mobile technology, at certain time intervals such as daily, or immediately after certain events such as after an interpersonal conflict (Reis et al., 2014). ESM offers much higher data validity and situational precision than after-action interviews or surveys that often suffer from recall bias (Stone et al., 2003).
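The mixed text-and-numbers structure of an ESM questionnaire can be sketched minimally as follows. The structure, field names and sample answers are illustrative assumptions, not the data model of any particular ESM tool.

```python
# A minimal sketch of an ESM-style questionnaire: some questions are
# scale-based (numbers), others open-ended (text). All names are illustrative.
ESM_QUESTIONNAIRE = [
    {"prompt": "What are you doing right now?", "type": "text"},
    {"prompt": "How do you feel about it?", "type": "scale", "min": 1, "max": 5},
    {"prompt": "Why do you feel like this?", "type": "text"},
]


def collect_response(answers):
    """Pair each answer with its question; a real app would also timestamp it."""
    return [
        {"prompt": q["prompt"], "type": q["type"], "answer": a}
        for q, a in zip(ESM_QUESTIONNAIRE, answers)
    ]


# One momentary self-report with invented answers:
report = collect_response(["Team meeting", 4, "We resolved a conflict"])

# The resulting dataset mixes numbers and text in every report:
scales = [r for r in report if r["type"] == "scale"]
texts = [r for r in report if r["type"] == "text"]
```

Collected many times per participant, such reports form the fine-grained mixed dataset the method is known for.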

Critical Realism (CR)

CR is a philosophical position that bridges between two extremes – rigid objectivism (a search for universal truths) and fuzzy subjectivism (each individual has to find her own truth). Such bridging is not achieved by taking a diluted middle-ground position, but through taking a multi-paradigm stance where two or more incompatible positions are maintained at the same time (Patomäki & Wight, 2000). According to CR, there is indeed a reality out there, independent of the observer. Still, this reality is not easily measurable or knowable since it is partly socially constructed and also impacted by humans with their varying desires and emotions (Easton, 2010). What social scientists then need to do is to try to identify weak regularities on a micro level (Danermark et al., 2002). Such regularities are context-dependent, and need to be studied case-by-case (Bhaskar & Danermark, 2006). In CR, these regularities are termed causal mechanisms. Ylikoski (2019, p.16) writes that “a mechanism-based explanation tells us how the cause produces its effects by describing the process by which this happens”.

Causal mechanisms are unpredictable in that they are “sometimes active, sometimes dormant”, and that there are often multiple counteractive mechanisms at play (Danermark et al., 2002, p.199). Social scientists therefore need to be critical of macro-level descriptions of what things are like, and instead drill deep down inside the black box of society’s internal machinery, digging out those real causal mechanisms that explain why things are the way they are in each particular case (Elster, 1989). This is why it is labelled critical realism – reality is indeed out there, but it needs to be painstakingly dug out and explained through a broad variety of available means. One approach of particular relevance here is to stage social experiments where “the researcher consciously provokes a situation in order to study how people handle it” (Danermark et al., 2002, p.103). CR also recommends a reliance on mixed methods (Danermark et al., 2002, pp.150-176).

Historical development of Loopme

LoopMe began in 2012 as a very simple web survey placed as a shortcut on the smartphone “desktops” of 13 entrepreneurship students at Chalmers. Participants were asked to respond whenever something emotionally salient happened in their education, and over 18 months they submitted 556 short reports. The aim was research-driven—linking emotion-laden moments to developing entrepreneurial competencies—and the tool primarily served to enrich later interviews. A small follow-on study in lower secondary schools in 2013 broadened the context.

In 2014, a professional team at the social venture Me Analytics AB rebuilt LoopMe to support larger, commissioned studies by the Swedish National Agency for Education. This second version introduced a social feed of submitted “loops,” predefined tags that made experiences easier to label and analyze, and leader–participant chat around specific entries. With these changes, LoopMe could scale to a few hundred users and generate more structured, timely signals.

A third version followed in 2015 with a redesigned interface and several practical features: grouping and filtering of loops, notifications, invitations, profile management, teacher mini-surveys and information loops—and a mandatory emotion rating for each entry. Research use expanded, but a structural limitation became clear: because posting was voluntary (like social media), only about 15–30% of any group typically contributed, and activity tended to fade when a study ended. Usage churn eroded the user base and, by spring 2016, threatened the social venture’s viability.

The response was a strategic pivot. LoopMe was rebuilt from the ground up in 2016 with mandatory tasks at its core, clarifying what participants were expected to do at least once and when. To support study leaders, the team created reusable “content packages”—ready-made sets of 3–20 tasks with associated tags. Paradoxically, centering behavior (tasks) did not dilute research; it strengthened it. By explicitly linking designed tasks (independent variables) to emotion ratings, tags, and written reflections (dependent variables), version 4 produced richer, more analyzable data while sustaining engagement. What began as a tension between scholarly and practitioner goals became a virtuous loop: better participation drove better evidence, and better evidence reinforced practical impact.

Today, LoopMe is a widely used scientific social media (SSM) platform for organizational learning and research. Study leaders design small, concrete tasks and outcome tags; participants perform the tasks in real settings and then use LoopMe to submit short reflections with emotion ratings and selected tags via app or web. A private social stream in LoopMe lets study leaders comment quickly, while the system aggregates both qualitative and quantitative data for analysis. Causal patterns between tasks and outcomes are surfaced through action-by-tag heatmaps and clusters of quotes. In short, LoopMe turns DAS cycles into a scalable and time-efficient everyday practice.
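The aggregation behind an action-by-tag heatmap can be sketched as a simple counting exercise. The loops, task names and tag names below are invented for illustration and do not reflect LoopMe’s actual data model.

```python
from collections import Counter

# Each submitted "loop" links the task performed (independent variable)
# to the outcome tags selected (dependent variables). Invented sample data:
loops = [
    {"task": "Interview a customer", "tags": ["increased confidence", "new insight"]},
    {"task": "Interview a customer", "tags": ["new insight"]},
    {"task": "Pitch your idea",      "tags": ["increased confidence"]},
]

# Count how often each (task, tag) pair co-occurs across all loops.
heatmap = Counter()
for loop in loops:
    for tag in loop["tags"]:
        heatmap[(loop["task"], tag)] += 1

# The counts can then be rendered as a task-by-tag matrix, with clusters
# of quotes from the written reflections attached to each cell.
```

High counts in a cell suggest a pattern worth probing with the attached qualitative reflections, in line with the mixed-methods stance described above.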
