Limits of artificial intelligence in controlling and the ways forward: a call for future accounting research

Heimo Losbichler (Controlling, Accounting and Financial Management, School of Business and Management, University of Applied Sciences Upper Austria, Steyr, Austria)
Othmar M. Lehner (Hanken School of Economics, Helsinki, Finland) (Controlling, Accounting and Financial Management, School of Business and Management, University of Applied Sciences Upper Austria, Steyr, Austria)

Journal of Applied Accounting Research

ISSN: 0967-5426

Article publication date: 13 January 2021

Issue publication date: 23 February 2021


Abstract

Purpose

Looking at the limits of artificial intelligence (AI) and controlling based on complexity and system-theoretical deliberations, the authors aimed to derive a future outlook of the possible applications and provide insights into a future complementarity of human–machine information processing. Derived from these deliberations, the authors propose a research agenda in five areas to further the field.

Design/methodology/approach

This article is conceptual in its nature, yet a theoretically informed semi-systematic literature review from various disciplines together with empirically validated future research questions provides the background of the overall narration.

Findings

AI is found to be severely limited in its application to controlling and is discussed from the perspectives of complexity and cybernetics. Three such limits, namely the Bremermann limit, the partial detectability and controllability of complex systems and the inherent biases in the complementarity of human and machine information processing, are presented as salient and representative examples. The authors then go on to carefully illustrate what a human–machine collaboration could look like depending on the specifics of the task and the environment. With this, the authors propose different angles on future research that could revolutionise the application of AI in accounting leadership.

Research limitations/implications

Future research on the value promises of AI in controlling needs to take into account physical and computational effects and may embrace a complexity lens.

Practical implications

AI may have severe limits in its application for accounting and controlling because of the vast amount of information in complex systems.

Originality/value

The research agenda consists of five areas that are derived from the previous discussion. These areas are as follows: organisational transformation, human–machine collaboration, regulation, technological innovation and ethical considerations. For each of these areas, the research questions, potential theoretical underpinnings as well as methodological considerations are provided.

Citation

Losbichler, H. and Lehner, O.M. (2021), "Limits of artificial intelligence in controlling and the ways forward: a call for future accounting research", Journal of Applied Accounting Research, Vol. 22 No. 2, pp. 365-382. https://doi.org/10.1108/JAAR-10-2020-0207

Publisher

Emerald Publishing Limited

Copyright © Heimo Losbichler and Othmar M. Lehner

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction: a paradigm shift in planning, budgeting and forecasting?

The influence of digitisation is directed at two very different areas of management accounting and monitoring (summarised as controlling henceforth). On the one hand is the automation of repetitive routine activities (robotic process automation) and on the other hand is the support or automation of demanding analytical activities (such as machine forecasts and artificial intelligence [AI]). While the automation of routine activities, particularly in large companies, is progressing successfully, the support of analytical activities seems to be considerably more difficult. According to a study by the German Federal Ministry of Economics, only 5% of German companies currently use AI in one of their divisions (Feser, 2020). The percentage of companies using AI in controlling is therefore negligible. At the same time, there are great expectations from the AI systems used in controlling (Seufert and Treitz, 2019). This article examines both the limits of the forecasting capabilities and the possible applications of the automated forecasts and provides a derived research agenda for our field.

The complaints about an uncertain and difficult-to-plan environment, about planning becoming prematurely outdated and about budgetary "power games" have a long history. At the beginning of the 2000s, the Beyond Budgeting Round Table (BBRT) loudly called for an end to classical planning. In the course of the 2008 financial crisis, the term VUCA, which stands for volatility, uncertainty, complexity and ambiguity, became established as a synonym for the problem of the predictability of future developments (Bennett and Lemoine, 2014). In response to the then "new normal", concepts such as modern budgeting, scenario planning, bandwidth planning and rolling forecasts were presented, which in various ways propagated the abandonment of detailed, precise planning and forecasting (Lepori and Montauti, 2020). With the advent of digitisation, however, a paradigm shift seems to have begun. Access to new data sources (big data), almost unlimited computing power and AI systems has quickly given rise to keywords such as predictive analytics and to the first applications of AI-based machine forecasts (Batistič and der Laken, 2019; Brands and Holtzblatt, 2015; Earley, 2015; Mikalef et al., 2019; Qasim and Kharbat, 2019). This revived the belief in the predictability of the future (see Figure 1), at least until the outbreak of the coronavirus crisis. The few field reports, predominantly from large corporations, seem to confirm the possibility of predictability through AI and the superiority of machine forecasts.

The differences between human and machine forecasting can be plausibly explained by the complementarity of human and machine information processing (Harris and Wang, 2019; Hofmann and Rothenberg, 2019). However, despite positive examples from experience, realistic expectations are appropriate with regard to the forecast accuracy of machine planning and forecasting, as there are limits to the ascertainability and planning capability of AI in a VUCA environment (Caglio, 2003; Warner and Wäger, 2019). These limits are discussed from the point of view of complexity and cybernetics in the next few sections, before we move on to illustrate what a human–machine collaboration can look like and what this would mean for future research by providing an empirically validated research agenda.

Limits of predictability from the perspective of complexity and cybernetics

Dealing with complexity is considered one of the greatest challenges in management today (Falschlunger et al., 2016; Reeves et al., 2020). Managers have to take into account an ever-increasing number of factors in corporate management, which are also changing ever more rapidly and are highly interlinked. The main drivers of this development are globalisation and, paradoxically, despite its salvatory potential, the rapid progress of digitisation, which networks the world in real time and increases the speed of change. Cybernetics, in particular, has taken on the task of dealing with complexity. Pioneers such as Ashby, Beer, Forrester, Luhmann, Ulrich, Probst, Gomez, Malik, Dörner and Vester created elementary foundations for this a long time ago (Luhman and Boje, 2001; Oll et al., 2016; Reeves et al., 2020), which are now more topical than ever with regard to the limits of AI (Dwivedi et al., 2019). As salient examples, the Bremermann limit (Bremermann, 1963; Malik, 1984) and the partial detectability and controllability of complex systems (Luhman and Boje, 2001; Zelinka et al., 2014) are highlighted in this article.

Bremermann's limit

In accordance with Bremermann's limit, human knowledge faces an insurmountable, absolute limit, which cannot be removed even by the greatest progress in digitisation. Because of the atomic nature of matter, there is an upper bound on information processing that no computer or brain consisting of matter with a mass M can exceed, derived from the maximum speed of light c: no material system can process more than ∼2 × 10^47 bits per second per gram (Bremermann, 1962, 1982). By further including general relativistic effects, the gravitational constant and Planck's constant, an absolute limit of ∼10^43 bits per second has even been proposed, irrespective of the mass (Gorelik, 2009). As a consequence, even the most powerful cloud-based computer clusters, such as those running Hadoop (Zikopoulos and Eaton, 2011), may never have the computing power required for completely accurate forecasts in today's complex competitive environment, and Moore's law of processing power doubling approximately every two years cannot be projected ad infinitum because of these physical limits of information processing (Gatherer, 2007). Malik made an interesting comparison in his habilitation thesis (see Malik, 2000), in which he determined the theoretical limit of information processing capacity under the assumption that the entire mass of the earth had been a gigantic computer permanently processing information since the beginning of the earth's history. He contrasted this capacity with the complexity of typical decision-making situations in management, showing the limited ability to make predictions (Malik, 2000).
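
Malik's thought experiment can be reproduced as a back-of-the-envelope calculation. The sketch below is purely illustrative: the rounded values for the Earth's mass and age, and the choice of 310 binary decision factors, are our own assumptions, not figures from the cited works.

```python
# Illustrative sketch of Bremermann's limit in the spirit of Malik's
# thought experiment. All constants are rounded assumptions.

BREMERMANN_BITS_PER_S_PER_G = 2e47   # upper bound cited in the text
EARTH_MASS_G = 6e27                  # approximate mass of the Earth in grams
EARTH_AGE_S = 4.5e9 * 3.15e7         # approximate age of the Earth in seconds

# Total information a hypothetical Earth-sized computer could have
# processed since the planet formed:
total_bits = BREMERMANN_BITS_PER_S_PER_G * EARTH_MASS_G * EARTH_AGE_S
print(f"Earth-computer bound: {total_bits:.1e} bits")   # on the order of 10^92

# A decision situation with only 310 interacting binary factors already
# has more possible states than this hypothetical computer could enumerate:
states = 2.0 ** 310
print(f"States of a 310-factor binary system: {states:.1e}")
print("Decision space exceeds the bound:", states > total_bits)
```

Even under these generous assumptions, a modest number of interacting binary factors exhausts the physically possible information processing, which is the core of Malik's argument.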

Partial detectability and controllability of complex systems

Figure 2 shows the structural makeup of complex systems such as our current economic system. They consist of a multitude of elements (Reeves et al., 2020) (a to h) and relationships (arrows between the elements), whereby the system breaks down into a part (a, b, d, e, g or h) that is visible to the actuator A (manager or controller) and an invisible part (c or f). An example of an invisible element would be the coronavirus before its outbreak. This has a significant consequence: we do not know that certain elements exist and hence cannot take them into account when making decisions. The system is therefore only partially detectable and can only be modelled incompletely in AI systems.

Furthermore, complex systems are divided into active elements (b and d), which change independently, and passive elements (a, c, e, f, h and g). Because of the active elements, complex systems have their own dynamics. They do not wait for the intervention of the actuator but change independently. Both the elements themselves and the relationships between the elements can change without any intervention. Consequently, the input (management interventions) no longer determines the output alone. Rather, the output depends on both the input and the states of the system. The system therefore constantly surprises us with its behaviour. Forrester (1974) described this as counterintuitive because known phenomena suddenly behave differently from what we expect on the basis of experience (Dörner et al., 1983). This also applies to machine forecasts based on AI, which are ultimately expected to predict the future accurately on the basis of past data (states of the system). The intrinsic dynamics of complex systems, taken together with Bremermann's limit, have profound consequences: the ideal of exact prediction becomes impossible. Rather, we must be content with patterns.
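
The state dependence described above can be made concrete with a minimal, hypothetical sketch: a system whose internal state drifts according to its own (nonlinear) dynamics, so that the same management intervention meets a different state, and hence produces a different output, in every period. The update rule and all numbers are invented for illustration.

```python
# Minimal, hypothetical sketch of why identical management interventions
# (inputs) can produce different outcomes in a complex system: the output
# depends on the system's internal state, which evolves on its own
# ("active elements"), not only on the input. All numbers are illustrative.

def step(state, intervention):
    """One period: the state drifts by its own dynamics, then the
    intervention acts on whatever state it happens to meet."""
    state = 1.1 * state - 0.3 * state ** 2   # intrinsic (nonlinear) dynamics
    output = state + intervention            # input + current state -> output
    return state, output

state = 0.5
same_input = 1.0
outputs = []
for _ in range(5):
    state, out = step(state, same_input)
    outputs.append(round(out, 3))

# The same intervention applied in every period yields different outputs,
# because the system never waits in the same state twice:
print(outputs)
```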

Finally, managers in complex systems have only limited control options. To achieve the goals, the actuator must change the state of certain elements. For the actuator, the elements of the system break down into elements that can be influenced directly (dotted lines from the actuator to the elements a, d and g), indirectly influenced (b, e and h) or not influenced (c and f). In addition, the isolated influence of the elements is difficult because they are highly interconnected, and the actuator is influenced by the elements themselves (dashed lines from the elements a, e and h to the actuator). This results in a limited control possibility in addition to the limited prognosis possibility.

In summary, it can be deduced from these two areas that the ideal of exact forecasts from a cybernetic and systems theory perspective remains an unattainable ideal even in the age of AI and machine forecasts. This is not to say, however, that machine forecasts cannot bring about improvements in controlling. On the one hand, the same result can be achieved by automation with less effort, and on the other hand, an improvement in quality can be achieved through the complementarity of human and machine information processing.

Complementarity of human and machine information processing

The question of why machine forecasts might be superior to human forecasts can be answered primarily from the perspective of human rationality deficits. The performance limitations of the human brain in information reception and processing can be summarised as follows (see also Haefner, 2000):

  1. People can only use information that they have learned or that is quickly available externally (e.g. on paper). The human brain has weaknesses in retrieving information.

  2. The human problem-solving space is relatively small. Only a little information can be processed simultaneously: short-term memory can hold no more than five to nine information or sense units, so-called chunks, at a time (Miller, 1994, 2003).

  3. The brain tires and can only solve problems continuously for a limited period. Continuous thinking over a longer period is accompanied by an increasing frequency of errors.

  4. The brain works relatively slowly. The speed, however, depends on the type and familiarity of the problem: compare the lightning-fast human pattern recognition of whether an apple is fresh or rotten with the slowness of mental arithmetic.

Besides the capacitive “skill deficits”, there are behavioural deficits. For example, people are content to achieve their individual aspirations and do not necessarily strive for the maximum achievable or they make decisions for personal benefit rather than for the benefit of the company. Cognitive limitations and behavioural patterns have been widely discussed in the literature. The long list of identified “biases” bears witness to this. The following examples show the typical human deficits in forecasting (Barberis and Thaler, 2003; de Graaf, 2018; Forbes, 2009):

  1. Overestimating oneself often leads to optimistic forecasts.

  2. People unconsciously align forecasts with an “anchor” or orientation point. In forecasting, for example, this can be the budget or the previous year's values.

  3. The willingness to accept new information increases when the information supports the intention of the decision-maker.

  4. Power-related distortions of information, such as fear of a loss of reputation, mean that forecasts are maintained even when the opposite is already apparent.

  5. Discounting: as remote problems seem less significant than immediate ones, negative developments are not immediately communicated.

From the above examples, it is clear that the use of automatic forecasts can increase the quality of forecasts. On the one hand, a larger amount of information can be included in the forecast, and on the other hand, machine forecasts are not subject to the distortions caused by interests (“unemotional forecast”). However, caution is advised. An essential principle of AI is the ability to learn and improve. Optimisation algorithms can determine the accuracy of the model and adapt it to increase future accuracy. Even if AI systems have no self-interest, human biases can be learned unconsciously through the data provided to the system.
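
How such a bias can be inherited may be sketched as follows. We assume, purely for illustration, that historical human forecasts ran 10% above the actual outcomes; a naive model fitted to reproduce those forecasts then carries the optimism forward.

```python
# Hypothetical illustration of how an AI system can absorb a human bias
# from its training data. Suppose past human forecasts were, on average,
# 10% too optimistic relative to actual outcomes. A naive model fitted to
# reproduce those forecasts inherits the optimism. All numbers are invented.

actuals         = [100.0, 120.0, 90.0, 110.0]
human_forecasts = [a * 1.10 for a in actuals]   # systematically 10% high

# "Training": learn a single scaling factor that best maps actuals to the
# (biased) human forecasts (a stand-in for a real learner).
scale = (sum(f * a for f, a in zip(human_forecasts, actuals))
         / sum(a * a for a in actuals))         # least-squares slope

new_actual = 105.0
model_forecast = scale * new_actual
print(f"Learned scale: {scale:.2f}")            # ~1.10: the bias survives
print(f"Model forecast for a true value of 105: {model_forecast:.1f}")
```

A real AI system is, of course, far more complex, but the mechanism is the same: the model optimises its fit to the data it is given, biases included.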

In addition to the limitations of the human brain, one of its major strengths should be mentioned. The human brain constantly solves problems that were never explicitly posed to it. The brain does not have a static structure; rather, it constantly reorganises itself. Thus, problems are spontaneously seen in a new way. This characterises the creativity and innovative ability of human beings and is an essential difference from machines.

Human–machine collaboration

In our previous sections, we showed that

  1. AI systems and machine forecasts are not yet very widespread and are still in their infancy but are considered to be of great importance and to have great potential for the future.

  2. The ideal of accurate forecasts remains unattainable even in the age of AI, but their use can improve human forecasting capabilities and automate or support the creation process.

  3. Humans also have cognitive abilities that machines do not (yet) have (Kulkarni, 2019; Väyrynen and Laari-Salmela, 2015).

This raises the question of how best to use machine forecasts. Should they replace or supplement human forecasts? Similar to autonomous driving, different levels of support can be distinguished: assisted intelligence, augmented intelligence and autonomous intelligence (Jarrahi, 2018; Munoko et al., 2020; Shank and DeSanti, 2018). With assisted intelligence, the entire forecast process remains in the hands of the controller. The AI or automatic forecast works according to the concrete requirements of the controller, and the controller decides on the result of the forecast (see Figure 3).

With augmented intelligence, the controller's forecast and the automatic forecast run in parallel. The differences are analysed, and the controller or manager decides which result is used. An example of augmented intelligence in the forecast process can be found at SAP AG: if the deviation between the forecasts exceeds a threshold value, the affected areas must explain why they, and not the system, are right. In the last stage, autonomous intelligence, the automatic forecast replaces the human forecast, and both controllers and managers rely on the AI system (see Figure 4).
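
The escalation logic of such an augmented-intelligence setup can be sketched in a few lines. The line items, figures and the 10% threshold below are illustrative assumptions, not details of the SAP process.

```python
# Minimal sketch of the augmented-intelligence pattern: human and machine
# forecasts run in parallel, and only when their relative deviation exceeds
# a threshold is the line item escalated for explanation. All names,
# figures and the threshold are illustrative assumptions.

THRESHOLD = 0.10  # escalate if forecasts differ by more than 10%

forecasts = [
    {"item": "Revenue EMEA", "human": 420.0, "machine": 445.0},
    {"item": "Revenue APAC", "human": 300.0, "machine": 380.0},
    {"item": "Opex central", "human": 150.0, "machine": 152.0},
]

def needs_explanation(human, machine, threshold=THRESHOLD):
    """Relative deviation of the two forecasts against their mean."""
    mean = (human + machine) / 2
    return abs(human - machine) / mean > threshold

escalated = [f["item"] for f in forecasts
             if needs_explanation(f["human"], f["machine"])]
print(escalated)   # only the line items the controller must justify
```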

Therefore, AI-based decision-making in accounting must use AI for the right purposes and processes given the specific context and situation, with each context raising different dominant challenges. Figure 5 illustrates an example in which AI and humans would support each other in different ways in three different scenarios. What they all have in common is that the human brain would innovate and direct, whereas the AI would analyse raw data in various ways depending on the purpose and provide an early interpretation of the findings. This detailed examination of the processes also demonstrates the necessity for future accounting employees to understand how to make competent, situational use of AI (Briggs and Makice, 2012) and what future accounting work with AI would look like (Brougham and Haar, 2017; Lehner et al., 2021).

In an uncertainty scenario, where few risk functions are known and swift decisions are necessary, timely information and the automatic detection of anomalies are key (Brougham and Haar, 2017; Donning et al., 2019). Objectivity and transparency are crucial in this scenario. In a complexity scenario, with an abundance of big data, the data processing would easily exceed human cognitive capabilities, leading to an information overload (Falschlunger et al., 2016; Perkhofer and Lehner, 2019). Here, a different kind of AI support seems appropriate: the analysis of unidentified features and correlations in the data (Quattrone, 2016) to guide decision-making (Huttunen et al., 2019), supported by clever visualisations (Falschlunger et al., 2015). The third scenario is referred to by Jarrahi (2018) as an "equivocality" scenario. It may be the most complex scenario for the human–machine symbiosis, as it entails challenges such as ambiguity and thus raises questions about the objectivity, trust and accountability of those who make decisions. AI can analyse sentiments using text-interpretation algorithms and develop new representations of these unstructured data to support decision-making (Quattrone, 2017).
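
As a minimal illustration of the anomaly detection mentioned for the uncertainty scenario, a simple two-standard-deviation rule can surface outlying figures for human review. The data and threshold are invented, and real systems would use considerably richer models.

```python
# Hedged sketch of "automatic detection of anomalies": flag monthly figures
# that deviate by more than two standard deviations from the series mean.
# The series and the 2-sigma threshold are illustrative assumptions.

from statistics import mean, stdev

monthly_sales = [100, 103, 98, 101, 99, 102, 100, 140, 97, 101]

mu, sigma = mean(monthly_sales), stdev(monthly_sales)
anomalies = [(i, x) for i, x in enumerate(monthly_sales)
             if abs(x - mu) > 2 * sigma]
print(anomalies)   # the outlying month surfaces for human review
```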

Finally, in addition to the level of support, the level of expectation placed on the AI system must be considered. Similar to the stages of analytics development, the expectation may merely be the provision of relevant deviation information as a basis for the actual forecast (descriptive and diagnostic). In most cases, however, companies are not satisfied with this and implement a quantitative forecast (predictive). The highest demands are placed on an AI system that forecasts not only the probable outcome but also the necessary measures to achieve it (prescriptive). From today's perspective, however, this still seems to be a vision of the future.

Discussing research agenda in five areas

Summing up our deliberations on AI and controlling, we invite authors to follow up on our call for future research and connect their research to the ongoing discourse on the digitalisation of accounting in the Journal of Applied Accounting Research. The outcome of our collective research should also inform society about the broader opportunities and threats stemming from AI-based controlling and help it form an educated opinion on the implied societal changes with all of the corresponding ethical challenges.

At this point, we would like to acknowledge the fantastic support of our colleagues in drafting this research agenda based on their earlier works in Lehner et al. (2019). In a focus group moderated by a co-researcher, the authors, together with the above-mentioned experts in this field, discussed the theoretical conceptions presented in the earlier sections of this article, first inductively derived five research areas from them and subsequently compiled a list of the most pressing research questions for each. The resulting list was then presented and discussed at a large finance and accounting conference, where participants (N = 65) were able to vote on the relevance of the questions via the software Mentimeter (on a scale of 1 to 5, with 5 being the highest). The questions with a relevance score above 3 are now presented, clustered by research area.

Research area 1: organisational transformation

Many scholars would agree that any change of such gravity in accounting most likely goes together with a substantial organisational and societal transformation (Troshani et al., 2019). Depending on the chosen theoretical framework, however, causations can be assumed in either or even neither direction between these two levels. Thus, the interplay between the nucleus of accounting transformation and the immediate organisational context as well as the larger societal context will be one of the important issues from an organisational science perspective.

Insights from empirical studies framed, for example, in a neoinstitutional theoretical setting that accepts the separation of human actors and structure (such as the norms and traditions of the accounting profession) and takes a certain drive for standardisation and isomorphic adaptation for granted will certainly provide valuable starting points. Moreover, Giddens' structuration theory (Englund and Gerdin, 2014), with its notion of transcending the structure–agent separation towards a system of accountability with situated practices (Conrad, 2014); Latour's actor–network theory (ANT), which adds non-humans as actors (Latour, 2005; Robson and Bottausci, 2018) and creates fluid accounting objects that are translated into a system; and configuration theory (and the earlier contingency theory), with its focus on the organisational gestalt or habitus (Bourdieu and Nice, 1977) being shaped by a complex contextual interplay (Otley and Berry, 1980), may be other worthwhile perspectives to understand and explain the organisational changes that we expect to see in the coming years.

What all of these theoretical approaches have in common is that they lean towards a pragmatic worldview, which is not limited by the often artificially conjured dichotomy of a realist versus constructivist ontology in the social sciences and thus allows researchers to embrace a variety of epistemological approaches with a range of suitable research designs. This may also be particularly necessary because the sheer dimensions in terms of size and speed (Crookes and Conway, 2018) and, particularly, the interconnectedness between the levels on which change is about to happen will potentially transcend the current literature on the change in organisations, while at the same time, we expect much of the current theory of change to remain at least partially valid in this new, rapidly changing context. Following Edmondson and McManus (2007), we believe that such an intermediate state of theory needs to be approached using mixed-methods designs, combining inductive and deductive reasoning.

From this perspective, we identified the following salient questions:

  1. What will future accounting organisations look like in terms of structure and hierarchies (Kruskopf et al., 2020)?

  2. What is the role of societal values and their transformation in a digital age (Diller et al., 2020; Troshani et al., 2019; Vial, 2019) in the changes in the “whatness” of accounting?

  3. How can further system-theoretic and cybernetic approaches help to mitigate the overpromises of AI in terms of organisational capabilities?

  4. To what extent should AI-based robots (Cooper et al., 2019; Rozario and Vasarhelyi, 2018) be seen as actors in a network and how can we find out about their agency?

  5. How will AI transform not only the practices but also the structure as a result of their enactment?

  6. What is the role of technological leadership and change management (Makrygiannakis and Jack, 2016) in this?

Research area 2: human–machine collaboration

A strong focus on the human and societal factors in the transformation towards AI-based management accounting seems timely and apt. On the one hand, it is certainly pressing from a practice point of view, as the technological advancements will inevitably have a strong impact on the existing roles, duties and corresponding skills of workers, managers and recipients of reports in the accounting profession (Neely and Cook, 2011), as well as on stakeholders in general. On the other hand, we need to identify the ethical challenges in theory (Alles, 2020) to come up with normative agreements on what we want such a collaboration to look like.

For the employees in the field, we need to understand the new job roles and matching qualifications that are necessary not only to persist in this new area but also to help deal with the aberrations that any change process will inevitably bring, with the ultimate goal of further developing the accounting profession (Leitner-Hanetseder et al., 2021). Questions in this area will concern career prospects and related skills, how our education systems can deal with the demand, the tools necessary to support human cognition given a highly abstract and aggregated level of information (such as visualisations and interactions), the psychological factors of change management and the necessity to adapt, and finally questions of power and control. Here, Foucauldian perspectives on what constitutes power, from a critical discourse perspective, may help to identify problematic developments and allow us to raise the right questions in society. The metatheories of capabilities or the resource-based view (RBV) (Alexy et al., 2018) may provide other suitable and less critical approaches to understand and guide the interplay between organisational leadership and the role of humans in an AI-augmented world (Lehner et al., 2021). From a strategic management perspective, these theories may help us understand how a competitive advantage can be created and maintained given such rapid organisational transformations.

The decisive change in this collaboration for individuals is that future AI will not only provide the decision-relevant information but also propose the decision itself on the basis of this very information. Following these lines of thought, how to ensure bias-free cognition and the necessary transparency leading to such a decision, as well as who should be held accountable (Munoko et al., 2020), will be amongst the most pressing issues. Thus, from the perspective of the individuals having to deal with the output and decision-making of an AI system, several questions arise. These include not only the role of trust in the decisions of such systems but also more collective fears concerning how sustainable a functionalist, AI-based assessment without human values can be.

From this perspective, we identified the following salient questions:

  1. What will drive the dynamics in a geographically disembodied, highly distributed and heterogeneous AI-empowered accounting team of the future (Leitner-Hanetseder et al., 2021)?

  2. Can we find an optimal way in terms of efficiency, effectiveness and humanist values for a collaboration between AI and humans in different contexts and tasks?

  3. Who will be the new “powerful” actors in such a human–machine collaboration?

  4. What will be the necessary skills to cope with the rising demands in terms of a “digital fluency”?

  5. How should and could accounting education incorporate the necessary adaptions, not only to train students in the application of AI but also to help them understand the larger picture and be aware of the humanist values and ethical challenges involved?

Research area 3: regulation

From the regulatory perspective, the need for transparency of the internal processes and internal decision-making criteria of the AI to comply, for example, with the General Data Protection Regulation (GDPR) criteria is still not sufficiently solved, and it may take a while to reach a satisfactory level. In the meantime, accounting and information systems researchers may need to look into which levels of transparency for which applications are really necessary. There will certainly be a difference from the perspectives of regulatory requirements, internal advisory systems based on AI-derived cost predictions and external compliance reports based on true big data when it comes to traceability, confirmability and finally, transparency. To solve the problem of transparency and accountability, researchers need to first fully understand how deep learning systems simulate cognition, particularly when it comes to multifunctional networks. The learning process based on feedback loops, which leads, for example, to the known problems of overfitting and easily introduces a potential sample bias, may provide more hurdles to overcome before a truly transparent, traceable and accountable AI system is possible (Buhmann et al., 2019; Leicht-Deobald et al., 2019; Martin, 2019).

Besides the necessary regulatory changes, for example, those concerning labour rights and standards, taxation and data protection, other interesting insights may include the necessity to redefine the role of auditors and authorities to ensure compliance with these changes. Other worthwhile endeavours may be to define how accounting standards need to adapt to better reflect the quality and the worth of the collected data and the derived intelligence of such intangible assets.

Finally, research needs to carefully monitor and guide regulatory communication that not only is comprehensible by humans but also can be processed by accounting systems, such as the already existing International Financial Reporting Standards (IFRS) or Financial Accounting Standards Board (FASB) codifications.

From this perspective, we identified the following salient questions:

  1. How can regulations be translated into a machine-readable format and to what extent will AI be able to interpret them teleologically?

  2. Do we need additional IFRS and US Generally Accepted Accounting Principles (US-GAAP) regulations on data as assets (Birch et al., 2020)?

  3. How can we find a balance between stifling over-regulation and the potentially negative externalities of unsupervised innovation?

  4. Who can and should be held accountable in terms of decision-making and the outcomes: AI or management?

  5. How can data rights be algorithmically defined and enforced, and how can protection of and compliance with data regulations be ensured (Gruschka et al., 2018)?

  6. What will be the role of big data and public or private blockchains in the assurance of reporting (Bonyuet, 2020; Qasim and Kharbat, 2019)?
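The notion of machine-readable regulation raised in question 1 can be made concrete with a small sketch: a disclosure rule encoded as data rather than prose, so that software can evaluate a report against it. The rule identifier, the 10% threshold and the field names below are entirely hypothetical illustrations of ours, not actual IFRS or US-GAAP content.

```python
# A minimal, purely illustrative sketch of a "machine-readable" rule:
# the rule is data, so software can check a report against it.
RULES = [
    {
        "id": "DISCLOSE-SEGMENT",  # hypothetical rule id, not a real standard
        "description": "Disclose a segment note if segment revenue "
                       "exceeds 10% of total revenue.",
        "applies_if": lambda r: r["segment_revenue"] > 0.10 * r["total_revenue"],
        "required_field": "segment_note",
    },
]

def check_report(report: dict) -> list:
    """Return the ids of all rules the report violates."""
    violations = []
    for rule in RULES:
        if rule["applies_if"](report) and not report.get(rule["required_field"]):
            violations.append(rule["id"])
    return violations

report = {"total_revenue": 1_000_000, "segment_revenue": 250_000}
print(check_report(report))  # → ['DISCLOSE-SEGMENT']
```

A teleological interpretation, as asked in question 1, is exactly what such a rigid encoding cannot capture: the predicate fires mechanically, with no sense of the rule's purpose.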

Research area 4: technological innovation and implications for accounting

Research in this area needs to look at information technology (IT) architectures and infrastructures, how these technological artefacts influence the practice and control of accounting systems and the role of big data and algorithms as drivers (Baker and Andrew, 2019; Huttunen et al., 2019; Salijeni et al., 2018). The above-described necessity to include external data of various sources and formats in a vast, virtual data repository will raise many questions. Moreover, variable-efficient problem modelling, informed by information-theoretical considerations of which data are actually needed and which are merely available in abundance, would move current solutions towards considerably higher practical usability. For this, accounting and information science scholars will need to work together with data scientists to identify both theoretical frameworks and the corresponding algorithmic solutions (Kellogg et al., 2019; Kemper and Kolkman, 2019).

From this perspective, we identified the following salient questions:

  1. How should the ideal infrastructure be laid out depending on the tasks and context, including considerations of cloud versus internal storage and computing power, speed, scalability, flexibility and, most importantly, availability?

  2. How can AI base its calculations and decisions on just the relevant information and use its resources efficiently, for example, through clever feature selection and by avoiding overly complex models? In other words, how can human domain know-how and the related heuristics be translated into the inner workings of AI, and how can techniques such as ridge (L2-regularised) regression help to avoid overfitting and enhance external validity (Crowder, 2016)?

  3. How can standardisation help, but also potentially hinder, (open) data exchange, depending on the various sources and contexts involved?

  4. Following the previous question, how can the inner workings of a deep learning network as the basis of an AI system be made transparent and traceable (Kemper and Kolkman, 2019), and how can the system create targeted communication (including visualisation) of complex data structures on an aggregated level that still allows us to validate the outcome through interaction?

  5. Related to this, how can an isomorphic bias, based on hindsight learning from the machine-based decisions (leaving out alternatives), be avoided, and what security measures need to be in place to control these problems (Glikson and Woolley, 2020)?

  6. How can the practical decision-making of AI be ensured when the existing data do not sufficiently specify the problem at hand?

  7. How will quantum computing in the future affect the Bremermann limit of information processing power?
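The regularisation idea in question 2 can be illustrated with a minimal sketch. It fits a ridge (L2-regularised) regression in closed form on synthetic, near-collinear data; the toy data, the cost-prediction framing and the shrinkage parameter are our own assumptions, not taken from the article.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^(-1) X'y.
    alpha = 0 gives ordinary least squares; larger alpha shrinks the
    coefficients, trading a little bias for much lower variance."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

# Hypothetical toy setting: few noisy cost observations with two almost
# collinear drivers, where plain least squares is unstable.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=12)  # near-collinear columns
y = X[:, 0] + 0.1 * rng.normal(size=12)

w_ols = ridge_fit(X, y, alpha=0.0)    # ordinary least squares
w_ridge = ridge_fit(X, y, alpha=1.0)  # L2-regularised fit

# The L2 penalty shrinks the unstable coefficients towards zero.
print(np.linalg.norm(w_ols) > np.linalg.norm(w_ridge))  # True
```

The shrinkage is guaranteed here: in the eigenbasis of X'X each coefficient component is multiplied by a factor below one, which is precisely the mechanism that curbs overfitting on scarce accounting data.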

Research area 5: ethical implications

Finally, and more importantly from a normative perspective (Alzola, 2017; Stahl and Flick, 2011), research needs to bring in the different voices from society about which ethical boundaries need to be in place when it comes to the decision-making of AI-powered accounting systems (Dwivedi et al., 2019; Glikson and Woolley, 2020; Munoko et al., 2020). The role of cultural standards and, potentially, of the firm itself needs to be revisited. We already see, for example, in entrepreneurship research with its recent discussions on hybrid business models, that environmental, social and commercial factors all need to be taken into account when making strategic decisions. Such factors may be under-represented because unstructured and less-quantifiable non-financial information is harder to process and considerably scarcer than the “hard” and easy-to-digest financial information. From the current streams of literature in digital accounting, it becomes clear that any ethical considerations need to be enforced by rules and regulations and can no longer be based on the personal human values of managers (Kellogg et al., 2019; Kirkpatrick, 2016; Kovacova et al., 2019; Martin, 2019). The AI's answers as to how a data-derived strategy should be put into place need to be carefully monitored, and a societally accepted way of integrating people, planet and profit considerations into the merely functionalist approaches of non-human actors has to be found in a process that includes more than industry and policymakers.

Any ethical considerations – as far as such considerations are even possible on a meta-level without a cultural context – will need to be inserted as rules, and the impact of a potential sample bias in machine learning has to be examined from various critical angles. However, such AI data-derived decision-making may also have its merits, as nepotism and other irrational behaviour of managers will potentially be reduced. Therefore, agency theory may well interplay with philosophical and (critical) sociological approaches to build a solid foundation of what the role of ethics should be in AI-based accounting (ter Bogt and Scapens, 2019).

From this perspective, we identified the following salient questions:

  1. How can social justice perspectives guide our thinking on the implementation of AI and its impact on the workforce (Fia and Sacconi, 2018)?

  2. What is the role of “good” corporate governance (Haslam et al., 2019; Stacchezzini et al., 2020) in this and how can it be implemented?

  3. Can AI ever come to make ethical decisions given that the underlying algorithms (Kellogg et al., 2019; Lindebaum et al., 2020; Martin, 2019) might be biased and non-transparent?

  4. To what extent can we take up the existing utopian and dystopian fictional narratives, such as Asimov's three laws of robotics and machine meta-ethics (Anderson, 2007) as guidance for our quest in creating ethical regulations in robotic process automation (Gotthardt et al., 2020)?

  5. Will the completely rational thinking of AI amplify the injustice embedded in a system that is based on short-termism and shareholder value rather than on humanist values? If so, do we first need a discussion of societal values in the age of AI?

Conclusion

This paper set out to first explore the potential limits of AI and controlling based on complexity and system-theoretical deliberations. From there, we derived a future research outlook on possible applications and provided insights into a future complementarity of human–machine information processing. While this study was conceptual in nature, a theoretically informed, semi-systematic literature review from various disciplines provided the background of the discussion, and we directed the reader to relevant examples of the identified perspectives.

With this, we also wanted to demonstrate how a blend of theoretical foundations and academic validation, together with behavioural insights and derived policy advice, can help a larger target audience in their decision-making and conduct around AI in accounting.

As elaborated in the article, AI was found to be severely limited in its application to controlling with respect to complexity science and cybernetics. Three such limits, namely the Bremermann limit, the problems with the partial detectability and controllability of complex systems and the inherent biases in the complementarity of human and machine information processing, were presented as salient and representative examples. We then went on to illustrate what a human–machine collaboration that makes specific use of AI, depending on the task and the environment, could look like.
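For reference, the Bremermann limit invoked above can be restated compactly; this is the standard result from the cited Bremermann papers, not a new derivation, and the worked figure is Bremermann's classic illustration.

```latex
% Bremermann's limit: maximal information-processing rate of matter
B \;=\; \frac{c^{2}}{h} \;\approx\; 1.36 \times 10^{50}
        \ \text{bits}\,\text{s}^{-1}\,\text{kg}^{-1}
% Illustration: a computer of the Earth's mass (\approx 6 \times 10^{24}\,kg)
% running for the Earth's age (\approx 10^{10}\ \text{yr} \approx 3 \times 10^{17}\,s)
% could process at most roughly
B \cdot m \cdot t \;\approx\; 10^{93}\ \text{bits},
% Bremermann's "transcomputational" threshold: problems requiring more
% bits than this are physically intractable for any classical computer.
```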

Finally, on the basis of our deliberations, we established a multidisciplinary research agenda consisting of five areas: organisational transformation, human–machine collaboration, regulation, technological innovation and ethical considerations. For each of these areas, we proposed different angles that could revolutionise the application of AI in accounting leadership and provided empirically validated, corresponding research questions with potential theoretical underpinnings as well as methodological considerations to the community.

With this early research, we aim to start the discourse and invite the larger scholarly accounting community to embrace this new topic and field. From a practical side, our deliberations should also serve teaching professionals, corporate executives, public policymakers and civil servants who are confronted with questions around controlling and AI in a larger accounting context.

Figures

Figure 1: Influences and the digitisation of forecasting

Figure 2: Exemplary structure of complex systems

Figure 3: Controlling circle and forecast analysis

Figure 4: Fields of application and degree of support of machine forecasts

Figure 5: AI and human collaboration

References

Alexy, O., West, J., Klapper, H. and Reitzig, M. (2018), “Surrendering control to gain advantage: reconciling openness and the resource‐based view of the firm”, Strategic Management Journal, Vol. 39 No. 6, pp. 1704-1727.

Alles, M.G. (2020), “AIS-ethics as an ethical domain: a response to Guragai, Hunt, Neri and Taylor (2017) and Dillard and Yuthas (2002)”, International Journal of Digital Accounting Research, Vol. 20, pp. 2-29, doi: 10.4192/1577-8517-v20_1.

Alzola, M. (2017), “Beware of the watchdog: rethinking the normative justification of Gatekeeper liability”, Journal of Business Ethics, Vol. 140 No. 4, pp. 705-721, doi: 10.1007/s10551-017-3460-3.

Anderson, S.L. (2007), “Asimov's ‘three laws of robotics’ and machine metaethics”, AI and Society, Vol. 22 No. 4, pp. 477-493, doi: 10.1007/s00146-007-0094-5.

Baker, M. and Andrew, J. (2019), “Call for papers: big data and accounting”, Critical Perspectives on Accounting, Vol. 59, pp. I-II, doi: 10.1016/s1045-2354(19)30023-1.

Barberis, N. and Thaler, R. (2003), “A survey of behavioral finance”, Handbook of the Economics of Finance, Vol. 1, pp. 1053-1128.

Batistič, S. and van der Laken, P. (2019), “History, evolution and future of big data and analytics: a bibliometric analysis of its relationship to performance in organizations”, British Journal of Management, Vol. 30 No. 2, pp. 229-251, doi: 10.1111/1467-8551.12340.

Bennett, N. and Lemoine, G.J. (2014), “What a difference a word makes: understanding threats to performance in a VUCA world”, Business Horizons, Vol. 57 No. 3, pp. 311-317.

Birch, K., Chiappetta, M. and Artyushina, A. (2020), “The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset”, Policy Studies, Vol. 41 No. 5, pp. 1-20.


Bonyuet, D. (2020), “Overview and impact of blockchain on auditing”, International Journal of Digital Accounting Research, Vol. 20, pp. 31-43, doi: 10.4192/1577-8517-v20_2.

Bourdieu, P. and Nice, R. (1977), Outline of a Theory of Practice, Vol. 16, Cambridge University Press, Cambridge.

Brands, K. and Holtzblatt, M. (2015), “Business analytics: transforming the role of management accountants”, Management Accounting Quarterly, Vol. 16 No. 3, pp. 1-12.

Bremermann, H.J. (1962), “Optimization through evolution and recombination”, Self-Organizing Systems, pp. 93-106.

Bremermann, H.J. (1963), “Limits of genetic control”, IEEE Transactions on Military Electronics, Vol. 7, Nos 2 and 3, pp. 200-205.

Bremermann, H.J. (1982), “Minimum energy requirements of information transfer and computing”, International Journal of Theoretical Physics, Vol. 21 Nos 3-4, pp. 203-217.

Briggs, C. and Makice, K. (2012), Digital Fluency: Building Success in the Digital Age, SociaLens, Bloomington, IN.

Brougham, D. and Haar, J. (2017), “Smart technology, artificial intelligence, robotics, and algorithms (STARA): employees' perceptions of our future workplace”, Journal of Management and Organization, Vol. 24 No. 2, pp. 239-257, doi: 10.1017/jmo.2016.55.

Buhmann, A., Paßmann, J. and Fieseler, C. (2019), “Managing algorithmic accountability: balancing reputational concerns, engagement strategies, and the potential of rational discourse”, Journal of Business Ethics, Vol. 163 No. 2, pp. 265-280, doi: 10.1007/s10551-019-04226-4.

Caglio, A. (2003), “Enterprise Resource Planning systems and accountants: towards hybridization?”, European Accounting Review, Vol. 12 No. 1, pp. 123-153, doi: 10.1080/0963818031000087853.

Conrad, L. (2014), “Reflections on the application of and potential for structuration theory in accounting research”, Critical Perspectives on Accounting, Vol. 25 No. 2, pp. 128-134, doi: 10.1016/j.cpa.2012.12.003.

Cooper, L.A., Holderness, D.K., Sorensen, T.L. and Wood, D.A. (2019), “Robotic process automation in public accounting”, Accounting Horizons, Vol. 33 No. 4, pp. 15-35, doi: 10.2308/acch-52466.

Crookes, L. and Conway, E. (2018), “Technology challenges in accounting and finance”, Contemporary Issues in Accounting, Springer, pp. 61-83.

Crowder, J.A. (2016), “AI inferences utilizing occam abduction”, Paper Presented at the 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS).

de Graaf, F.J. (2018), “Ethics and behavioural theory: how do professionals assess their mental models?”, Journal of Business Ethics, Vol. 157 No. 4, pp. 933-947, doi: 10.1007/s10551-018-3955-6.

Diller, M., Asen, M. and Späth, T. (2020), “The effects of personality traits on digital transformation: evidence from German tax consulting”, International Journal of Accounting Information Systems, Vol. 37, 100455.

Dörner, D., Kreuzig, H., Reither, F. and Stäudel, T. (1983), Lohhausen: Vom Umgang mit Unbestimmtheit und Komplexität, Hans Huber Verlag, Bern, Stuttgart, Wien.

Donning, H., Eriksson, M., Martikainen, M. and Lehner, O.M. (2019), “Prevention and detection for risk and fraud in the digital age-the current situation”, ACRN Oxford Journal of Finance and Risk Perspectives, Vol. 8, pp. 86-97.

Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P.V., Janssen, M., Jones, P., Kar, A.K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., Medaglia, R., Meunier-FitzHugh, K.L., Meunier-FitzHugh, L.C.L., Misra, S., Mogaji, E., Sharma, S.K., Singh, J.B., Raghavan, V., Raman, R., Rana, N.P., Samothrakis, S., Spencer, J., Tamilmani, K., Tubadji, A., Walton, P. and Williams, M.D. (2019), “Artificial Intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy”, International Journal of Information Management. doi: 10.1016/j.ijinfomgt.2019.08.002.

Earley, C.E. (2015), “Data analytics in auditing: opportunities and challenges”, Business Horizons, Vol. 58 No. 5, pp. 493-500, doi: 10.1016/j.bushor.2015.05.002.

Edmondson, A.C. and McManus, S.E. (2007), “Methodological fit in management field research”, Academy of Management Review, Vol. 32 No. 4, pp. 1246-1264.

Englund, H. and Gerdin, J. (2014), “Structuration theory in accounting research: applications and applicability”, Critical Perspectives on Accounting, Vol. 25 No. 2, pp. 162-180, doi: 10.1016/j.cpa.2012.10.001.

Falschlunger, L., Treiblmaier, H., Lehner, O. and Grabmann, E. (2015), “Cognitive differences and their impact on information perception: an empirical study combining survey and eye tracking data”, Information Systems and Neuroscience, Springer, pp. 137-144.

Falschlunger, L., Lehner, O. and Treiblmaier, H. (2016), “InfoVis: the impact of information overload on decision making outcome in high complexity settings”, Proceedings of the 2016 SIG HCI, Dublin, pp. 1-5.

Feser, D. (2020), “Hürden für den Einsatz künstlicher Intelligenz”, ZfO, No. 1, p. 17.

Fia, M. and Sacconi, L. (2018), “Justice and corporate governance: new insights from rawlsian social contract and sen's capabilities approach”, Journal of Business Ethics, Vol. 160 No. 4, pp. 937-960, doi: 10.1007/s10551-018-3939-6.

Forbes, W. (2009), Behavioural Finance, John Wiley & Sons, Chichester.

Forrester, J.W. (1974), “Das intuitionswidrige Verhalten sozialer Systeme”, in Das globale Gleichgewicht, Stuttgart, pp. 13-37.

Gatherer, D. (2007), “Less is more: the battle of Moore's law against Bremermann's limit on the field of systems biology”, BMC Systems Biology, Vol. 1 No. S1, p. P53.

Glikson, E. and Woolley, A.W. (2020), “Human trust in artificial intelligence: review of empirical research”, The Academy of Management Annals, Vol. 14 No. 2, pp. 627-660.

Gorelik, G. (2009), Bremermann's Limit and cGh-Physics, arXiv preprint arXiv:0910.3424.

Gotthardt, M., Koivulaakso, D., Okyanus, P., Saramo, C., Martikainen, M. and Lehner, O.M. (2020), “Current state and challenges in the implementation of smart robotic process automation in accounting and auditing”, ACRN Journal of Finance and Risk Perspectives, Vol. 9 No. 1, pp. 90-102.

Gruschka, N., Mavroeidis, V., Vishi, K. and Jensen, M. (2018), “Privacy issues and data protection in big data: a case study analysis under GDPR”, Paper Presented at the 2018 IEEE International Conference on Big Data (Big Data).

Haefner, K. (2000), Psychische Mobilität mit Informationstechnik-ein zentrales Konzept der computerisierten Gesellschaft, TUEV, Bremen.

Harris, R.D.F. and Wang, P. (2019), “Model-based earnings forecasts vs. financial analysts' earnings forecasts”, The British Accounting Review, Vol. 51 No. 4, pp. 424-437, doi: 10.1016/j.bar.2018.10.002.

Haslam, J., Chabrak, N. and Kamla, R. (2019), “Emancipatory accounting and corporate governance: critical and interdisciplinary perspectives”, Critical Perspectives on Accounting, Vol. 63, doi: 10.1016/j.cpa.2019.102094.

Hofmann, C. and Rothenberg, N.R. (2019), “Forecast accuracy and consistent preferences for the timing of information arrival”, Contemporary Accounting Research, Vol. 36 No. 4, pp. 2207-2237, doi: 10.1111/1911-3846.12499.

Huttunen, J., Jauhiainen, J., Lehti, L., Nylund, A., Martikainen, M. and Lehner, O.M. (2019), “Big data, cloud computing and data science applications in finance and accounting”, Journal of Finance and Risk Perspectives, No. 16 (ISSN 2305-7394).

Jarrahi, M.H. (2018), “Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making”, Business Horizons, Vol. 61 No. 4, pp. 577-586.

Kellogg, K., Valentine, M. and Christin, A. (2019), “Algorithms at work: the new contested terrain of control”, The Academy of Management Annals, Vol. 14 No. 1, doi: 10.5465/annals.2018.0174.

Kemper, J. and Kolkman, D. (2019), “Transparent to whom? No algorithmic accountability without a critical audience”, Information, Communication and Society, Vol. 22 No. 14, pp. 2081-2096.

Kirkpatrick, K. (2016), “Battling algorithmic bias”, Communications of the ACM, Vol. 59 No. 10, pp. 16-17, doi: 10.1145/2983270.

Kovacova, M., Kliestik, T., Pera, A., Grecu, I. and Grecu, G. (2019), “Big data governance of automated algorithmic decision-making processes”, Review of Contemporary Philosophy, Vol. 18, pp. 126-132.

Kruskopf, S., Lobbas, C., Meinander, H., Söderling, K., Martikainen, M. and Lehner, O. (2020), “Digital accounting and the human factor: theory and practice”, ACRN Journal of Finance and Risk Perspectives, Vol. 9 No. 1, pp. 78-89, doi: 10.35944/jofrp.2020.9.1.006.

Kulkarni, S.B. (2019), “Review of humans and machines at work: monitoring, surveillance, and automation in contemporary capitalism by phoebe V. Moore, Martin Upchurch, and Xanthe Whittaker”, Journal of Business Ethics, Vol. 161 No. 1, pp. 237-241, doi: 10.1007/s10551-019-04304-7.

Latour, B. (2005), Reassembling the Social: An Introduction to Actor-Network-Theory, Oxford University Press, Oxford.

Lehner, O.M., Leitner-Hanetseder, S. and Eisl, C. (2019), “The whatness of digital accounting: status quo and ways to move forward”, ACRN Journal of Finance and Risk Perspectives, Vol. 8 No. 2, pp. I-X, doi: 10.35944/jofrp.2019.8.2.001.

Lehner, O.M., Leitner-Hanetseder, S. and Eisl, C. (2021), “Dynamics and structures in AI-driven digital accounting and auditing: a structuration perspective and research outlook”, Accounting, Auditing and Accountability Journal.

Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I. and Kasper, G. (2019), “The challenges of algorithm-based HR decision-making for personal integrity”, Journal of Business Ethics, Vol. 160 No. 2, pp. 377-392, doi: 10.1007/s10551-019-04204-w.

Leitner-Hanetseder, S., Lehner, O.M., Forstenlechner, C. and Eisl, C. (2021), “A profession in transition: actors, tasks, and roles in AI-based accounting organisations”, Journal of Applied Accounting Research, Vol. 22.

Lepori, B. and Montauti, M. (2020), “Bringing the organization back in: flexing structural responses to competing logics in budgeting”, Accounting, Organizations and Society, Vol. 80, doi: 10.1016/j.aos.2019.101075.

Lindebaum, D., Vesa, M. and den Hond, F. (2020), “Insights from “the machine stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations”, Academy of Management Review, Vol. 45 No. 1, pp. 247-263, doi: 10.5465/amr.2018.0181.

Luhman, J.T. and Boje, D.M. (2001), “What is complexity science? A possible answer from narrative research”, Emergence: A Journal of Complexity Issues in Organizations and Management, Vol. 3 No. 1, pp. 158-168.

Makrygiannakis, G. and Jack, L. (2016), “Understanding management accounting change using strong structuration frameworks”, Accounting, Auditing and Accountability Journal, Vol. 29 No. 7, pp. 1234-1258, doi: 10.1108/aaaj-08-2015-2201.

Malik, F. (1984), “Systems approach to management: Hopes, promises, doubts—a lot of questions and some afterthoughts”, Self-Organization and Management of Social Systems, Springer, pp. 121-126.

Malik, F. (2000), Strategie des Managements komplexer Systeme: ein Beitrag zur Management-Kybernetik evolutionärer Systeme. 5, Haupt Verlag, Bern.

Martin, K. (2019), “Ethical implications and accountability of algorithms”, Journal of Business Ethics, Vol. 160 No. 4, pp. 835-850, doi: 10.1007/s10551-018-3921-3.

Mikalef, P., Boura, M., Lekakos, G. and Krogstie, J. (2019), “Big data analytics capabilities and innovation: the mediating role of dynamic capabilities and moderating effect of the environment”, British Journal of Management, Vol. 30 No. 2, pp. 272-298, doi: 10.1111/1467-8551.12343.

Miller, G.A. (1994), “Reprint of the magical number seven, plus or minus two: some limits on our capacity for processing information”, Psychological Review, Vol. 101 No. 2, pp. 343-351.

Miller, G.A. (2003), “The cognitive revolution: a historical perspective”, Trends in Cognitive Sciences, Vol. 7 No. 3, pp. 141-144.

Munoko, I., Brown-Liburd, H.L. and Vasarhelyi, M. (2020), “The ethical implications of using artificial intelligence in auditing”, Journal of Business Ethics, Vol. 167, doi: 10.1007/s10551-019-04407-1.

Neely, M.P. and Cook, J.S. (2011), “Fifteen years of data and information quality literature: developing a research agenda for accounting”, Journal of Information Systems, Vol. 25 No. 1, pp. 79-108, doi: 10.2308/jis.2011.25.1.79.

Oll, J., Hahn, R., Reimsbach, D. and Kotzian, P. (2016), “Tackling complexity in business and society research: the methodological and thematic potential of factorial surveys”, Business and Society, Vol. 57 No. 1, pp. 26-59, doi: 10.1177/0007650316645337.

Otley, D.T. and Berry, A.J. (1980), “Control, organisation and accounting”, Accounting, Organizations and Society, Vol. 5 No. 2, pp. 231-244.

Perkhofer, L. and Lehner, O. (2019), “Using gaze behavior to measure cognitive load”, Information Systems and Neuroscience, Springer, pp. 73-83.

Qasim, A. and Kharbat, F.F. (2019), “Blockchain technology, business data analytics, and artificial intelligence: use in the accounting profession and ideas for inclusion into the accounting curriculum”, Journal of Emerging Technologies in Accounting, Vol. 17 No. 1, pp. 107-117.

Quattrone, P. (2016), “Management accounting goes digital: will the move make it wiser?”, Management Accounting Research, Vol. 31, pp. 118-122, doi: 10.1016/j.mar.2016.01.003.

Quattrone, P. (2017), “Embracing ambiguity in management controls and decision-making processes: on how to design data visualisations to prompt wise judgement”, Accounting and Business Research, Vol. 47 No. 5, pp. 588-612, doi: 10.1080/00014788.2017.1320842.

Reeves, M., Levin, S., Fink, T. and Levina, A. (2020), “Taming complexity”, Harvard Business Review, Vol. 98 No. 1, pp. 112-121.

Robson, K. and Bottausci, C. (2018), “The sociology of translation and accounting inscriptions: reflections on Latour and accounting research”, Critical Perspectives on Accounting, Vol. 54, pp. 60-75, doi: 10.1016/j.cpa.2017.11.003.

Rozario, A.M. and Vasarhelyi, M.A. (2018), “How robotic process automation is transforming accounting and auditing”, The CPA Journal, Vol. 88 No. 6, pp. 46-49.

Salijeni, G., Samsonova-Taddei, A. and Turley, S. (2018), “Big Data and changes in audit technology: contemplating a research agenda”, Accounting and Business Research, Vol. 49 No. 1, pp. 95-119, doi: 10.1080/00014788.2018.1459458.

Seufert, A. and Treitz, R. (2019), “Künstliche Intelligenz und Controlling”, Controller Magazin, No. 3, p. 20.

Shank, D.B. and DeSanti, A. (2018), “Attributions of morality and mind to artificial intelligence after real-world moral violations”, Computers in Human Behavior, Vol. 86, pp. 401-411, doi: 10.1016/j.chb.2018.05.014.

Stacchezzini, R., Rossignoli, F. and Corbella, S. (2020), “Corporate governance in practice: the role of practitioners' understanding in implementing compliance programs”, Accounting, Auditing and Accountability Journal, Vol. 33 No. 4, pp. 887-911, doi: 10.1108/aaaj-08-2016-2685.

Stahl, B.C. and Flick, C. (2011), “ETICA workshop on computer ethics: exploring normative issues”, Privacy and Identity Management for Life, pp. 64-77.

ter Bogt, H.J. and Scapens, R.W. (2019), “Institutions, situated rationality and agency in management accounting”, Accounting, Auditing and Accountability Journal, Vol. 32 No. 6, pp. 1801-1825, doi: 10.1108/aaaj-05-2016-2578.

Troshani, I., Locke, J. and Rowbottom, N. (2019), “Transformation of accounting through digital standardisation”, Accounting, Auditing and Accountability Journal, Vol. 32 No. 1, pp. 133-162, doi: 10.1108/aaaj-11-2016-2794.

Väyrynen, T. and Laari-Salmela, S. (2015), “Men, mammals, or machines? Dehumanization embedded in organizational practices”, Journal of Business Ethics, Vol. 147 No. 1, pp. 95-113, doi: 10.1007/s10551-015-2947-z.

Vial, G. (2019), “Understanding digital transformation: a review and a research agenda”, The Journal of Strategic Information Systems, Vol. 28 No. 2, pp. 118-144.

Warner, K.S. and Wäger, M. (2019), “Building dynamic capabilities for digital transformation: an ongoing process of strategic renewal”, Long Range Planning, Vol. 52 No. 3, pp. 326-349.

Zelinka, I., Saloun, P., Senkerik, R. and Pavelch, M. (2014), “Controlling complexity”, How Nature Works, Springer, pp. 237-276.

Zikopoulos, P. and Eaton, C. (2011), Understanding Big Data: Analytics for Enterprise Class Hadoop and Streaming Data, McGraw Hill, New York.

Corresponding author

Othmar M. Lehner can be contacted at: othmar.lehner@hanken.fi
