Abstract
Purpose
The emergence of artificial intelligence (AI) is leading to a job transformation within the service ecosystem in which issues related to AI governance principles may hinder the social sustainability of the sector. The relevance of AI startups in driving innovation has been recognized; thus, this paper aims to investigate whether and how AI startups may influence the sustainable social development (SSD) of the service sector.
Design/methodology/approach
An empirical study based on 24 in-depth interviews was conducted to qualitatively explore the perceptions of service sector-facing AI policymakers, AI consultants and academics (n = 12), as well as AI startups (founders, AI developers; n = 12). An inductive coding approach was used to analyze the data.
Findings
As part of a complex system, AI startups influence the SSD of the service sector in relation to other stakeholders’ contributions for the ethical deployment of AI. Four key factors influencing AI startups’ ability to contribute to the SSD of the service sector were identified: awareness of socioeconomic issues; fostering decent work; systematically applying ethics; and business model innovation.
Practical implications
This study proposes measures for service sector AI startups to promote collaborative efforts and implement managerial practices that adapt to their available resources.
Originality/value
This study develops original guidelines for startups that seek the ethical development of beneficial AI in the service sector, building upon the Ethics as a Service approach.
Citation
Rojas, A. and Tuomi, A. (2022), "Reimagining the sustainable social development of AI for the service sector: the role of startups", Journal of Ethics in Entrepreneurship and Technology, Vol. 2 No. 1, pp. 39-54. https://doi.org/10.1108/JEET-03-2022-0005
Publisher
Emerald Publishing Limited
Copyright © 2022, Alejandra Rojas and Aarni Tuomi.
License
Published in Journal of Ethics in Entrepreneurship and Technology. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Artificial intelligence (AI) is being applied, together with other emerging technologies such as the Internet-of-Things and robotics, in an integrated system increasingly referred to as “Intelligent Automation” – a computerized system that can perform tasks with little or no human intervention (Tussyadiah, 2020). In this study, we use intelligent automation interchangeably with AI. In this view, AI has the agency to restrict, complement and ultimately displace humans in daily routines (Murray et al., 2021). Opinions differ on the societal impact of AI: whether it ultimately generates economic growth and new demand or disrupts the economy, AI is currently transforming not only mechanical tasks but also cognitive ones. This transformation is therefore seen as a major issue for AI ethics (Berente et al., 2021).
Throughout history, unregulated economic exchange systems have found it challenging to manage the negative impacts of emerging innovation; thus, the solution has been to rely on governance (Owen et al., 2013). Given that AI development is a decentralized international matter with comparatively weak entry barriers (The Stanford Human-Centered AI Initiative, 2018), global regulation seems necessary to guide developers toward the greater good (Tuomi et al., 2020a). However, regulatory responses tend to be slower than the actual pace of technological innovation, with decades of negative consequences often needed before there is clear evidence for developing new regulatory frameworks (Owen et al., 2013). As a starting point for AI governance, multiple general-purpose guidelines have been developed to persuade stakeholders. This kind of “soft” governance is often ignored by companies in favor of swiftness of action and financial gain (Hagendorff, 2020). Even when compliance guidelines are integrated into a larger governance ecosystem, such as policy frameworks, national plans and daily professional practices with different methodologies, they seem to be ineffective (Morley et al., 2021). There is an important gap between ethical principles and practice (Schiff et al., 2020). Designers and developers of AI systems are struggling to follow these guidelines and, in parallel, to innovate with products and services that their markets need and are willing to pay for. Startups, in particular, face more barriers because of the limited resources they can allocate to tasks that are not directly related to keeping the business alive (e.g. focusing extensively on AI ethics).
In the service ecosystem, AI is being introduced as a solution to enhance service providers’ businesses. The implementation of these technologies brings a natural reconfiguration of the roles that human employees have in service organizations; thus, automating service processes brings about a transformation in service jobs. According to the International Labor Organization (ILO, 2018), jobs are formed by a set of tasks. If some or all tasks of a given service job are automated, some job profiles will be transformed, some eradicated and some created anew (Tuomi and Ascenção, 2021). However, this could threaten the working conditions and opportunities of service employees, jeopardizing the sector’s sustainable socioeconomic development. Sharpley (2020) states that sustainable development is associated with the notion of humans’ well-being, which implies having the capabilities to fulfill their potential as individuals and members of society. Thus, in this paper, sustainable social development (SSD) in the service sector is envisioned as an ongoing process that secures decent work and well-being in the labor force by ensuring that any novel technology that might change their current situation does so in a way that enhances their moral, social and emotional conditions, instead of diminishing them (Ulhøi and Nørskov, 2021). In this regard, it is important to highlight the relevance of decent work in the service sector, which is often diminished by labor intensity (Winchenbach et al., 2019) and inadequate employment practices such as low wages or mentally and physically demanding tasks (Tuomi et al., 2020b), e.g. waiters and waitresses working more than 8 h a day at minimum wage. In theory, AI is proposed as a lifesaver for these kinds of workers, but the business imperative of AI is to take over some of their tasks and slowly displace them altogether.
In this article, the focus is not only on having a trustworthy and unbiased AI algorithm but also on the societal impacts of AI on service ecosystems, which are clearly interlinked. This study builds upon the ontology proposed by Johnson and Verdicchio (2017), which invites us to see AI as intrinsically tied to people and society. In other words, AI is not only the algorithm and its designers and developers; it also includes all decision-makers, regulators and managers related to this socio-technical system, on both the developer and customer sides. The literature has been vocal on how designers and developers need to follow AI ethics guidelines, but what about the service providers who use the AI systems? Tuomi et al.’s (2020a, 2020b) studies revealed that service providers conceptualize intelligent automation primarily as a substitute for employees; however, they also showed that technical limitations impede this substitution. The impact of AI on service jobs depends on the balance between task substitution and enhancement, which will define which tasks are grouped into different job positions (Ivanov, 2020). This means that there will be important ethical decisions to be made by service providers, not only by AI designers and developers.
In this vein, it is important to highlight the relevance of AI startups, which are creating innovative solutions for the service industry. Given their potential to propose scalable and repeatable business models, technology startups are considered a key medium for socioeconomic development and industrial shift through ground-breaking innovations (Passaro et al., 2020). Recent years have seen increased interest and investment in AI startups, whereby the number of startups that focus on AI development has been increasingly growing as measured, e.g. through received funding from private venture capital (VC) firms (Hagendorff, 2020). This is because digital technology development is characterized by low capital costs, making it easier for startups to start new AI businesses (ILO, 2018). In the application area of service robotics, every fifth service robot supplier is a startup (International Federation of Robotics, 2020).
This paper aims to address the following research question:
Can AI startups influence the sustainable social development of the service sector and, if so, how?
In doing so, the paper makes theoretical and practical contributions to extant literature in three key ways. First, the paper conducts an empirical study to build upon the Ethics as a Service approach (Morley et al., 2021), a perspective that offers actionable recommendations for practitioners who pursue the ethical development of AI. Second, the paper qualitatively explores key factors that influence AI startups’ ability to contribute to the SSD of the service sector. Third, based on the conceptual and empirical insight gained, novel guidelines for the ethical development of beneficial AI in the service sector are put forward.
2. Artificial intelligence ethics principles and societal impact
In recent years, several guidelines for developing beneficial AI have emerged from academia, governmental institutes and the private sector. These guidelines vary from abstract guiding principles to more concrete sets of technical guidelines for AI development. However, not all of them cover topics related to the societal impact on the workforce. Multiple studies have reviewed and analyzed AI guidelines to identify their main topics and the improvements they propose for AI development and application (Schiff et al., 2021). Some have identified a lack of content related to cultural sensitivity, sustainability and operationalization (Schiff et al., 2021). Overall, AI ethics guidelines usually suggest high-level principles, mainly about privacy and trustworthiness, and often lack plans for operationalization. AI ethics guidelines should have a clearer rooting in specific sectors and use contexts for them to be helpful for companies, particularly smaller businesses with limited resources, e.g. startups. For the guidelines to be fully actionable, there is a need to move from general-purpose guidance to more sector-specific governance principles.
2.1 Systematic-contextual way of distributed responsibility
Morley et al. (2021) proposed the concept of Ethics as a Service in an effort to bridge the principles-to-practice gap. The inspiration for this approach comes from the platform-as-a-service model in cloud computing, where the cloud supplier offers the main infrastructure but users are able to develop and design software or applications that fit their needs in a particular context. In this sense, Ethics as a Service frames AI ethics as a collaborative, ongoing process that has a solid foundation but naturally adapts to the circumstances. In this view, the outline of AI ethics is adaptable to the context but relies on a systematic application comprising procedures such as justifying design decisions, collaboratively developing and revising common principles, translating them into technical standards and constant monitoring. According to Morley et al. (2021), all of these can be supported by an independent multidisciplinary ethics board as well as by individual AI designers and developers. The downside of Ethics as a Service, especially for AI startups, is that all the audits needed to monitor the AI system require resource allocation. Modern technology startups, by definition, are in a constant race to deliver enough results before running out of runway to justify subsequent funding rounds, making resource allocation business-driven rather than ethics-driven (Ghezzi, 2020).
2.2 Design methodologies and tools for trustworthy artificial intelligence
This systematic-contextual way of distributing responsibility proposed by Morley et al. (2021) is assisted by methodologies that follow similar holistic principles and aim to go beyond normative guidelines to instead integrate all stakeholders systematically. Some examples are design thinking, speculative design and codesign. Design thinking, commonly used by software and AI developers, brings multiple methods and frameworks with principles such as putting people first and iterating (Kelly and Stackowiak, 2020). Speculative design is based on speculating about possible futures, designing alternative realities when creating AI-powered solutions by regularly conducting design experiments to prototype infrastructures and test assumptions (Stoimenova and Price, 2020). Speculative design helps to understand the relationships between stakeholders, the different values that they cherish and the specific requirements of the socio-technical system in which the potential solution will be applied.
Likewise, codesign as a methodology for designing trustworthy AI is based on the systematic collaboration of AI designers and developers together with a multidisciplinary team of experts and key stakeholders to create awareness of their interdependence while evaluating the ethical, legal and technical implications of the AI system in different scenarios (Zicari et al., 2021). Codesigning AI systems acknowledges that the design process happens within a broader socio-technical context that needs to consider all actors and find ways to minimize the vulnerability related to the short- and long-term use of such systems (Robertson et al., 2019). Having a heterogeneous group evaluating the AI system facilitates the validation of the initial idea presented by AI designers (Zicari et al., 2021). In other words, AI designers might be assuming that there is a latent need coming from the end-users, but when analyzing this claim together with the key stakeholders it is possible to validate assumptions and confront tensions before the AI system is implemented.
3. Method
Following a pragmatic paradigm, this article relies on field studies with a qualitative approach to explore different stakeholders’ perceptions vis-à-vis the role of AI startups as part of the socially sustainable development of AI in the service ecosystem. Data for this study was collected between April and May 2021, whereby 24 semi-structured interviews were conducted in two overlapping rounds. The first round of interviews started with AI policymakers, AI consultants and academics focusing on ethical AI research and development (n = 12). This was followed by a round of interviews with service sector–focused AI startups at different stages of funding (founders, AI developers; n = 12). The selection of experts to be interviewed was made by using purposive sampling. This type of sampling allows a deliberate search for interviewees with particular characteristics. Participants were found through the professional social media site LinkedIn. Invitation messages were sent to qualifying participants with information about the research and the required time for the interview. The interviews lasted 30–60 min. Interviewees were asked about the principles-to-practice gap, ethical decision-making, the role of AI startups in contributing to the SSD of the service sector, their impact on the job transformation, and decent and meaningful work, all in the context of the service sector. The discussion was based on the research question and guided by an interview protocol, whereby the overall narrative of the interviews stayed similar throughout, but each interview was slightly adjusted depending on the interviewee’s background and expertise.
All the interviews were recorded, automatically transcribed and manually anonymized. Data was collected until saturation was reached. After the transcription, inductive coding (first-order concepts, second-order themes leading to aggregated dimensions) was used to identify patterns, determine categories and analyze and interpret them (Gioia et al., 2013). Analysis was led by one researcher and checked by another, and theoretical saturation was achieved when no new categories, dimensions or relationships appeared during data analysis.
4. Findings and discussion
The first part of this section will assume a macro-perspective by considering the role of other stakeholders in relation to AI startups, aiming to illustrate the complex system in which all actors have different contributions for the ethical deployment of AI in the service ecosystem.
4.1 Artificial intelligence startups as part of a complex system
When asked about AI startups’ role as part of the service sector’s SSD, almost all participants mentioned other actors’ roles and their interrelationships. Thus, findings illustrate that AI startups’ responsibilities need to be understood and acted upon in relation to other stakeholders. In other words, to contribute to the SSD of the service sector, AI startups need to take responsibility internally, but they also need external support from different actors to ensure that everybody is playing their role accordingly.
This external support is based on collaborative efforts; however, some respondents argued that it is difficult to collaborate across stakeholder borders because of different vested interests and the ensuing politics. Indeed, some decisions depend on the bargaining power of service providers. One participant (P6) exemplified this problem by noting that an AI startup might encounter obstacles to automating services if a more socialist government puts pressure on, e.g. hotels that automate tasks, asking them to create a compensation fund for unemployment. In such a scenario, hotels would try to negotiate with the government, unions and the AI startup to consider retraining the staff. The respondent mentioned that there would be some pressure from consumers as well, because hotel guests might be willing to pay more to have a human handling check-in at midnight. Hence, the role AI startups play in actioning SSD within the service sector is intertwined with the socio-political and economic system within which the startup operates.
Out of the 24 participants, 10 emphasized governments’ role in this matter. For some respondents, governments should take full responsibility for the job market and the future of work in the service sector. Some policymakers support the idea that innovation cannot be completely regulated. As put by one participant:
Government and regulators theoretically should help to reduce those risks, but particularly with tech innovation, government tends to lag a few years behind, with solid damages kind of already happening by the time it gets regulated. For a variety of reasons, one of which is the government doesn't really understand technology (P10).
4.2 Artificial intelligence startups framing other stakeholders’ responsibilities
Overall, AI startups do not feel fully accountable for the deployment of AI and its impact on society. However, even when their responsibility reach appears to be very limited, some participants could imagine different scenarios in which other stakeholders’ actions could positively impact their own role in the ethical development of AI. For example, investors play an important role in supporting AI startups in pursuing ethical practices. As illustrated by one participant:
Tech startups don’t have the resources, but you as an incubator or you as an accelerator are bigger than them, so you have the resources, why don’t you make this available to the startups that are within your system (P5).
The respondent also added that investment decisions should include ethics consulting or impact assessments to better facilitate the conversation about ethics. In the end, if venture capitalists or angel investors make ethics a key part of the startup funding cycle, AI startups have to pay attention. One interviewee mentioned that investors should be motivated to fund ethical practices because it also protects them from risks that might damage the venture further down the line. However, currently, there are no incentives for investors to ask for ethical practices or impact assessments.
Another influential actor is academia. One of the respondents shared their experience of working together with university professors supporting their AI project. They took part in a university-led accelerator program, which set a foundation for success and facilitated a continuous flow of information between business and academia. Likewise, some participants mentioned social impact startups, whose primary role is to innovate with solutions for unemployment, promoting social, economic and cultural benefits. For example, an interviewee mentioned that these kinds of startups could develop tools that contribute to decent work by matching people to new jobs, finding creative ways to help people find a job. If AI startups are creating a problem, then other startups should take solving the problem as a new business opportunity.
Some mentioned that only the government should be responsible for the service sector’s SSD amidst the job transformation. Participants frequently mentioned the concept of creative destruction, which holds that AI will bring new demands that will keep the economy alive. Concerning service providers, one participant mentioned that the government should incentivize them to choose AI startups with demonstrable CSR practices. The respondent stressed that service providers should receive public funding from the government to motivate them to hire companies with ethical solutions. Additionally, some participants shared their perspectives about who should be accountable for reskilling and upskilling the service sector workforce. While some think that it is the government’s role, others believe it is the service provider who should take care of this matter.
Indeed, findings show that service providers’ role is to ease technology adoption, change management and decision-making with regards to the level of automation and the transformation of the job profiles. First, some respondents mentioned that if service providers such as restaurants or hotels are looking for cost reduction, then AI startups might try to get to the level of AI that replaces humans eventually. Tuomi et al.’s (2020a, 2020b) studies revealed similar findings, showing that the degree to which service operations were automated appears to be dependent on the established business model of the service provider. Thus, it is relevant to consider the service providers’ priorities when assessing the unexpected outcomes of AI. Second, results show that service providers’ role is to redefine job profiles. This comprises re-grouping service tasks around new job profiles and deciding whether it is more profitable to hire new workers or re-skill the current ones for these new jobs (ILO, 2018). Thus, service providers need to proactively balance task substitution and task enhancement to define new job positions (Ivanov, 2020).
Finally, findings show that service users, also called customers or consumers, also have a role to play in the SSD of the service sector’s job transformation. The findings revealed that consumers should proactively ask for ethical practices from AI companies, including startups. Results indicate that usually AI startup founders have no interest in ethics, but if the customer asks for an ethical approach, they might focus on it. Thus, if the main objective of an AI startup is to find product-market fit, and the market is asking for ethical practices, the startups have no choice but to follow the trend.
4.3 Factors influencing artificial intelligence startups’ ability to contribute to the sustainable social development of service sector
This second part presents the four key factors influencing AI startups’ contribution to the SSD of the service sector’s job transformation:
awareness of socioeconomic issues;
fostering decent work;
systematically applying ethics; and
business model innovation.
These factors are presented in no particular order; each is as relevant as the others. Furthermore, this part showcases key barriers AI startups face and proposes practical guidance for startups to address them.
4.3.1 Awareness of socioeconomic issues.
One of AI startups’ key influence factors is being aware of the overall impact of AI solutions on the socioeconomic sphere of the service sector. This means that they should be mindful of the possible consequences that their solutions bring to society.
Some participants suggested that AI startups can improve their awareness by implementing impact assessments. The analysis made for this kind of assessment can help them identify all the stakeholders impacted by the developed AI system, along with the implications of the system for their well-being, during the product ideation and development phase and before commercialization (Schiff et al., 2020). However, AI startup members confirmed that they do not have enough resources to conduct impact assessments or apply any other methods to understand how sustainable their solution is. Out of the five cofounders, three expressed that if resources are limited, the priority will not be sustainability but rather economic incentives (Ghezzi, 2020).
4.3.2 Fostering decent work.
The next key influence factor is for AI startups to encourage decent work with their products and services. This means that AI systems should promote well-being for the service workforce by creating more meaningful service tasks for humans while enhancing their capabilities. Service task enhancement can ameliorate working conditions, helping to lower turnover rates, which are particularly high in the service sector. Out of the 24 interviewees, 9 mentioned that if service employees’ capabilities are augmented, they will spend less time than usual on their activities. This helps to confirm that service providers will need to decide whether to replace employees with AI by analyzing the task portfolio of a job and optimizing task allocation between human employees and AI (Ivanov, 2020; Tuomi and Ascenção, 2021). Most participants think that employees will be doing more meaningful tasks in the same position instead of being laid off. Some of them stated that intelligent automation systems are not designed to substitute humans in the first place, paired with the fact that humans will always be needed to train AI systems. Hence, results show that AI startup members consider humans a critical element for actioning AI in practice.
Out of the three AI ethics consultants, two saw deciding which jobs are meaningful as an ethical judgment, a decision AI startups currently make without much oversight. If they think that a job is not meaningful, it would not be wrong to automate it. Certainly, most AI startups showed a strong opinion on which tasks should be automated: the routinized and mundane, which illustrates what the AI ethics consultants say about AI startups deciding what to substitute with intelligent automation. They emphasized that if it is repetitive and takes a lot of time, people will not consider the task meaningful. With a collaborative approach, AI startups could engage with the end-users of their solutions and as a result design more meaningful service jobs.
Overall, the main barriers to AI startups providing decent work in the service sector are technology adoption, change management and the conditions of deployment. Even though an AI startup develops a product with specific features, the way it is implemented might not necessarily promote decent work. The findings showed that change management is also relevant to fostering decent work in service. The problem is that some service employees might not want their routine and monotonous tasks to be automated. Certainly, the human resources management of service providers plays a key role in job transformation. Thus, AI startups’ role in fostering decent work can be hampered by how technology is implemented. In the context of service robot application in restaurants, Tuomi et al. (2021) referred to this as establishing the desired “robot-job fit” collaboratively with impacted stakeholders. Indeed, to truly encourage decent work in the service sector, AI startups might need to collaborate with service employees, customers, AI ethics consultants, service providers and governments to find solutions for change management, technology adoption and philosophical questions such as what is meant by “meaningful jobs.” AI startups should collaborate more proactively to be involved in the implementation of AI technology, not just its design and development. One way to do this might be to integrate the AI startup team into the real environment of the service provider and support this immersion. This way, they could gain firsthand experience that helps them make more informed decisions about which service tasks to automate, thus justifying design decisions and collaboratively developing and revising common principles, as in the Ethics as a Service approach (Morley et al., 2021).
4.3.3 Systematically applying artificial intelligence ethics.
Another key influence factor for startups’ contribution to the service sector’s SSD is having a system for applying AI ethics, which implies a structured way of managing ethical practices with constant monitoring. For instance, AI startups should integrate formal ethical due diligence into their business processes. Findings show that a system for applying AI ethics should follow ethical standards throughout all stages. Ethical constraints could be identified in the planning process; however, startups tend to encourage not formal planning but iterative experimentation. Some AI ethics experts commented that AI startups follow a fast-paced innovation process in which limited resources prevent them from focusing on ethics-related subjects. The results confirmed that having more robust AI and driving profit is often more important than acting ethically (Hagendorff, 2020). Interviewees agreed that AI startups’ priority is to reach product-market fit and focus on how to get more investment.
Indeed, some AI startup members seemed skeptical about AI ethics guidelines. Some perceive that their AI solutions are not compromising the well-being of human beings; thus, no ethical guidelines are needed. As put by one participant:
[…] Customer service is not like life and death decisions […] as long as it solves, it makes their life easier and as long as there is acceptance, then there’s no need to talk about ethics […] (P15).
When AI startups are not mindful of the overall impact of their solutions, they believe there is no need to spend time and resources on ethical issues.
As for empowerment in decision-making, AI system designers might not be empowered to make decisions related to ethics, because it is believed that only middle and senior management should be thinking about social sustainability. Findings show that, even at a microlevel, it is complex to be involved in stakeholders’ decision-making processes. Some AI startups acknowledged that there is always a middle person between the system developers and the service providers. This person gathers requirements and then takes them to the developer. Thus, there is no interaction between the designers and the end-users. If the AI startup team is not involved in the service environment, they cannot make decisions that could hinder or benefit the SSD of the service sector.
4.3.4 Business model innovation that supports sustainable social development.
The last key factor influencing startups' contribution to the service sector's SSD is business model innovation. One participant, an expert in AI policy (P6), expressed that AI startups should develop business models that consider unemployed people, e.g. by introducing re-training programs to make sure they can find another job. For instance, a cofounder (P13) mentioned that they hire people who have worked in call centers because they have the expertise and are valuable for training the AI systems this company develops, which may be seen as a more meaningful job than answering calls. This way, their expertise improves the business and supports the professional development of people whose jobs might eventually be eliminated.
To illustrate, an AI ethics consultant (P9) proposed a way to support the service sector’s SSD through AI startups’ business model by including funds for education. Interestingly, another interviewee mentioned that they implemented this initiative in their AI startup:
[…] one of the initiatives that we did was try to invest a little bit of what we gained from these activities so that somebody could get an education, some of the people that maybe lost their job […] so they could transform their expertise (P18).
Including key activities related to ethics within AI startups’ business models could be a way to structure the system around ethical practices. One participant (P5) mentioned that having this structure would be a competitive advantage because both investors and clients could see the ethical standards, policies and compliance metrics, which would in turn help each stakeholder earn trust in a way that competitors cannot. Additionally, AI startups can benefit from supporting SSD in their business models because it helps attract talent. One expert in AI ethics (P8) stated that it is particularly important to have a good brand image so that people want to work at an AI company. However, the respondent mentioned that AI startups usually do not have this in mind.
Even though there might be a market for ethical practices, some AI startup members consider ethical issues challenging because they often lack tangible results. Therefore, AI startups should consider social impact from the conceptualization of the key business model elements to be able to demonstrate how social impact can create revenue. However, some AI startup members considered this inessential, arguing that it usually does not add value for investors, who will ultimately ask for a return on investment for such activity.
In general, AI startup members mentioned that being socially responsible cannot be the main value proposition; instead, the main advantage of AI is more effective and cheaper processes. Thus, it is difficult to optimize processes without employing fewer people. As illustrated by one participant:
Everybody is obsessed with cost-cutting, and most of the costs come from employing human resources. So how do you balance the introduction of machines? But also make sure that it’s a thriving industry that employs people, not replaces them (P24).
Overall, the results illustrate that all stakeholders have their roles and vested interests within the service sector's job transformation, which causes conflicting trade-offs that should be collectively considered when looking for sustainable development in the sector (Sharpley, 2020). Therefore, promoting AI ethics that eventually lead to SSD will be an ever-evolving process (Morley et al., 2021) in which AI startups need to collaborate with other actors to promote collective well-being for the entire service ecosystem. Building upon Ethics as a Service (Morley et al., 2021), findings showed that some aspects of this protocol could be problematic for startups. Certainly, a systematic approach could fail in these kinds of organizations given the lack of resources and the fast-paced innovation, coupled with the lack of involvement in, and awareness of, the service providers' broader operational environment and societal issues. Limited funds also compromise the support of an independent multidisciplinary ethics board; investors could help manage this constraint. Table 1 summarizes our key findings.
The application of the service sector-specific guidelines needs to be supported by the key stakeholders; nonetheless, the main acting force comes from AI startups, whose role is to be proactive. As an example, take the case of embodied AI systems, i.e. service robots that seek to liberate waiters and waitresses from taking guests' orders and bringing food to the table (Tuomi et al., 2020a). This solution seems positive at the outset; however, by following an inclusive design methodology, such as codesign, AI designers and developers may engage in participatory observation and conversation with the waiters and waitresses to understand their main struggles and how such robots can help. It is an ongoing process of iteration and observation, whereby the ultimate value proposition is arrived at together. After using the robots for the proposed use case, the service employees may realize that they no longer need to memorize the menu or chat with customers as before; thus, their emotional, moral and social skills are being jeopardized (Tuomi et al., 2021). Through monthly employee surveys, the AI startup team may detect this issue and meet with the restaurant management, employees and possibly a recurring client to understand how the robot can be used to benefit the employees while at the same time offering advantages for the business. As a result, the discussion may prompt a change in the AI system and in how it is implemented in the service setting; e.g. the robots will now help detect when a client is looking for a free server and will only bring food to the table, while humans take the orders. Empirical studies and ongoing monitoring are needed to understand how AI startups and other stakeholders can play an active role in applying the guidelines developed herewith.
5. Conclusion, implications and future research
5.1 Conclusion
The service sector is being transformed by AI, changing the dynamics of value creation (Ivanov, 2020). Recognizing the scale of the ongoing transformation, different actors have made efforts to develop guidelines for the ethical application of emerging technology. However, to date, such guidelines have tended to be ineffective. New approaches such as Ethics as a Service (Morley et al., 2021) are trying to bridge the principles-to-practice gap; nonetheless, there are constraints for AI startups. Findings showed that a systematic approach could fail given the lack of resources and the fast-paced innovation, paired with the lack of involvement in, and awareness of, the service providers' broader operational environment and societal issues.
AI startups play an important role in bringing about a socially sustainable transformation in the service sector; however, given the lack of sector specific guidelines, AI startups may find it difficult to understand and appreciate their role within the SSD of the service ecosystem. To that end, drawing our empirical findings together, we develop the following guidelines for AI startups operating in the service sector to ensure the ethical development of AI which supports the SSD of service:
Do not underestimate your AI solutions; implement basic impact assessments.
Proactively engage with service providers to augment service employees’ capabilities.
Work as a multistakeholder team and use inclusive design methodologies.
Involve the team in the real service setting.
Reframe social sustainability in the business model to make it attractive for investors.
Findings show that VC firms and investors play a key role in motivating AI startups to follow a sustainable direction in new product and service development. Since AI startups operate within a capitalist system, there is a need to incentivize them to follow AI ethics and to invest resources, however limited, in applying AI ethics principles. Thus, our recommendation is to enforce these guidelines as part of startup incubators and accelerator programs and to highlight that sustainability can be reframed in the business model to make it attractive for investors. For example, AI startups applying to incubators or accelerators, or raising seed and subsequent funding, should be required to demonstrate how AI ethics principles are considered and embedded in the company's business model and operational practice, even as the business eventually scales up. However, as illustrated by this study, this is not yet the reality, and as such, future research is needed.
5.2 Limitations and future research
It was possible to triangulate by collecting qualitative data from a diverse set of stakeholders across two key geographical areas of AI development, Europe and North America. However, the study overlooks the viewpoint of developing countries. Future research should extend our approach to include underrepresented geographical areas. Further, future research should also explore more particular contextual settings of the service sector, e.g. finance or retail, or specific stakeholders who could bring new insights to the understanding of the phenomenon, such as VC firms, which could enforce ethics guidelines when making funding decisions. Likewise, future research could focus on a specific section of the startup ecosystem to give a comprehensive overview of the different factors that influence considerations for social sustainability throughout the startup's lifecycle.
Table 1. Summary of key findings and proposed service sector-specific guidelines, grouped by the key factors influencing AI startups' ability to contribute to the SSD of the service sector

Key factor: Awareness of socioeconomic issues

Key finding (what hinders AI startups' role in the SSD of the service sector): no apparent need and system for AI ethics
• AI startup members do not have enough resources to conduct impact assessments
• Their entrepreneurial methodologies do not encourage formal planning, which prevents the identification of ethical pitfalls
• Having more robust AI and driving profit is often more important than acting ethically
• When AI solutions are perceived as not compromising the well-being of human beings, e.g. customer service AI chatbots, ethical guidelines are seen as unnecessary

Proposed guideline: Do not underestimate your AI solutions; implement basic impact assessments
• However unharmful the AI system may seem, assessments must be done from the early stages of the project and periodically during the implementation to identify all the stakeholders and the implications of the AI system for decent and meaningful jobs
• This basic assessment need not require funds; it can be as simple as a feedback survey or quarterly meetings to assess the implementation. It entails audit processes that assess not only the impact but also the algorithms and data
• The aim is to apply such assessments systematically to secure constant monitoring and to register and justify all decisions made

Key factor: Fostering decent work

Key finding: issues with technology adoption and change management in service settings
• Despite AI startups developing a product with specific features, the way it is implemented might not necessarily promote decent work
• Change management is relevant to fostering decent work in the service sector. Service employees might not want their routine and monotonous tasks to be automated
• AI startups' role in fostering decent work can be hampered by how technology is implemented in practice

Proposed guideline: Proactively engage with service providers to augment service employees' capabilities
• AI startups should be involved in the implementation of AI technology, not just its design and development
• Collaborate with the service providers to improve the task portfolio of service jobs, i.e. promote less physically and mentally demanding tasks
• Build AI solutions that do not automate the tasks identified as enhancers of meaningful jobs for service employees; for this, it is necessary to empathize with them. Consider reducing labor intensity, inadequate employment practices and poor working conditions
• All of this requires committed collaboration with the AI startups' clients, the service providers. Collaboration is key

Key factor: Systematically applying ethics

Key finding: assuming which service tasks are best to automate without engaging with all stakeholders
• AI startups held strong opinions on which tasks should be automated: the routinized and mundane. They emphasized that if a task is repetitive and time-consuming, people will not consider it meaningful
• AI systems designers might not be empowered to make decisions related to ethics, because it is believed that only middle and senior management should think about social sustainability
• AI startups need to take responsibility internally, but they also need external support from different actors to ensure that everybody plays their role accordingly

Proposed guideline: Work as a multistakeholder team and use inclusive design methodologies
• AI startup members must design the AI systems and decide which tasks to substitute by using methodologies such as codesign, speculative design or design thinking
• Using such methodologies will ensure the consideration of all stakeholders throughout the startup's life cycle, from the advent of the first value proposition to scale-up and potential exit
• With a cross-disciplinary collaborative approach, AI startups can engage with end-users and indirect users of their solutions and, as a result, design more meaningful service jobs
• AI startups can collaborate with research partners to complement their expertise, e.g. researchers in service management and human resources

Proposed guideline: Involve the team in the real service setting
• Promote the immersion of the AI startup team into the real environment of the service provider to gain firsthand experience of how the service is carried out. This creates empathy and an understanding of the real struggles, leading to more meaningful jobs and more successful solutions to real problems
• Support this immersion with design methodologies

Key factor: Business model innovation

Key finding: AI startups' focus is on how to get more investment
• AI startup members consider sustainability issues challenging because they often lack tangible results
• Being socially responsible cannot be the main value proposition; instead, the main advantage of AI is more effective and cheaper processes

Proposed guideline: Reframe sustainability in the business model to make it attractive for investors
• Find the best way to tackle social impact issues in the business model innovation process by being aware of the benefits this brings for the startup's development; e.g. proposing sustainable solutions is a competitive advantage because both investors and clients can see the ethical standards, policies and compliance metrics, which in turn helps each stakeholder earn trust in a way that competitors cannot
• Consider social impact from the ideation and conceptualization of the key business model elements to be able to demonstrate how social impact can create revenue
References
Berente, N., Gu, B., Recker, J. and Santhanam, R. (2021), “Managing artificial intelligence”, MIS Quarterly, Vol. 45 No. 3, pp. 1433-1450.
Ghezzi, A. (2020), “How entrepreneurs make sense of lean startup approaches: business models as cognitive lenses to generate fast and frugal heuristics”, Technological Forecasting and Social Change, Vol. 161, p. 120324, doi: 10.1016/j.techfore.2020.120324.
Gioia, D.A., Corley, K.G. and Hamilton, A.L. (2013), “Seeking qualitative rigor in inductive research: notes on the Gioia methodology”, Organizational Research Methods, Vol. 16 No. 1, pp. 15-31.
Hagendorff, T. (2020), “The ethics of AI ethics: an evaluation of guidelines”, Minds and Machines, Vol. 30, pp. 99-120.
International Federation of Robotics (2020), “IFR press conference”, 24th September 2020, Frankfurt.
International Labour Organization (ILO) (2018), “The economics of artificial intelligence: implications for the future of work”, available at: www.ilo.org/global/topics/future-of-work/publications/research-papers/WCMS_647306/lang--en/index.htm
Ivanov, S. (2020), “The impact of automation on tourism and hospitality jobs”, Information Technology and Tourism, Vol. 22 No. 2, pp. 205-215.
Johnson, D.G. and Verdicchio, M. (2017), “Reframing AI discourse”, Minds and Machines, Vol. 27 No. 4, pp. 575-590.
Kelly, T. and Stackowiak, R. (2020), Design Thinking in Software and AI Projects: Proving Ideas through Rapid Prototyping, 1st ed., Apress.
Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J. and Floridi, L. (2021), “Ethics as a service: a pragmatic operationalisation of AI ethics”, Minds and Machines, Vol. 31 No. 2, pp. 239-256.
Murray, A., Rhymer, J. and Sirmon, D.G. (2021), “Humans and technology: forms of conjoined agency in organizations”, Academy of Management Review, Vol. 46 No. 3, pp. 552-571.
Owen, R., Stilgoe, J., Macnaghten, P., Gorman, M., Fisher, E. and Guston, D. (2013), “A framework for responsible innovation”, in Owen, R., Bessant, J.R. and Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, ProQuest Ebook Central.
Passaro, R., Quinto, I., Rippa, P. and Thomas, A. (2020), “Evolution of collaborative networks supporting startup sustainability: evidences from digital firms”, Sustainability, Vol. 12 No. 22, p. 9437.
Robertson, L.J., Abbas, R., Alici, G., Munoz, A. and Michael, K. (2019), “Engineering-based design methodology for embedding ethics in autonomous robots”, Proceedings of the IEEE, Vol. 107 No. 3, pp. 582-599.
Schiff, D., Borenstein, J., Biddle, J. and Laas, K. (2021), “AI ethics in the public, private, and NGO sectors: a review of a global document collection”, IEEE Transactions on Technology and Society, Vol. 2 No. 1, pp. 31-42.
Schiff, D., Rakova, B., Ayesh, A., Fanti, A. and Lennon, M. (2020), “Principles to practices for responsible AI: closing the gap”, arXiv:2006.04707.
Sharpley, R. (2020), “Tourism, sustainable development and the theoretical divide: 20 years on”, Journal of Sustainable Tourism, Vol. 28 No. 11, pp. 1932-1946, doi: 10.1080/09669582.2020.1779732.
Stoimenova, N. and Price, R. (2020), “Exploring the nuances of designing (with/for) artificial intelligence”, Design Issues, Vol. 36 No. 4, pp. 45-55.
The Stanford Human-Centered AI Initiative (HAI) (2018), “Introducing Stanford’s human-centered AI initiative”, available at: https://hai.stanford.edu/blog/introducing-stanfords-human-centered-ai-initiative
Tuomi, A. and Ascenção, M.P. (2021), “Intelligent automation in hospitality: exploring the relative automatability of frontline food service tasks”, Journal of Hospitality and Tourism Insights, doi: 10.1108/JHTI-07-2021-0175.
Tuomi, A., Tussyadiah, I. and Hanna, P. (2021), “Spicing up hospitality service encounters: the case of Pepper™”, International Journal of Contemporary Hospitality Management, Vol. 33 No. 11, pp. 3906-3925, doi: 10.1108/IJCHM-07-2020-0739.
Tuomi, A., Tussyadiah, I.P. and Stienmetz, J. (2020a), “Applications and implications of service robots in hospitality”, Cornell Hospitality Quarterly, Vol. 62 No. 2, doi: 10.1177/1938965520923961.
Tuomi, A., Tussyadiah, I., Ling, E., Miller, G. and Lee, G. (2020b), “x=(tourism_work) y=(sdg8) while y=true: automate(x)”, Annals of Tourism Research, Vol. 84, p. 102978.
Tussyadiah, I. (2020), “A review of research into automation in tourism: launching the Annals of Tourism Research Curated Collection on Artificial Intelligence and Robotics in Tourism”, Annals of Tourism Research, Vol. 81, p. 102883, doi: 10.1016/j.annals.2020.102883.
Ulhøi, J.P. and Nørskov, S. (2021), “Extending the conceptualization of performability with cultural sustainability: the case of social robotics”, in Handbook of Advanced Performability Engineering, Springer, Cham, pp. 89-104.
Winchenbach, A., Hanna, P. and Miller, G. (2019), “Rethinking decent work: the value of dignity in tourism employment”, Journal of Sustainable Tourism, Vol. 27 No. 7, pp. 1026-1043, doi: 10.1080/09669582.2019.1566346.
Zicari, R.V., Ahmed, S., Amann, J., Braun, S.A. and Brodersen, J. (2021), “Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier”, Frontiers in Human Dynamics, Vol. 3, p. 40.
Further reading
Tussyadiah, I., Tuomi, A., Ling, E., Miller, G. and Lee, G. (2022), “Drivers of organizational adoption of automation”, Annals of Tourism Research, Vol. 93, p. 103308, doi: 10.1016/j.annals.2021.103308.