Ontological model for the acoustic management in a smart environment

Gabriela Santiago (CEMISID, Universidad de Los Andes, Mérida, Venezuela)
Jose Aguilar (CEMISID, Universidad de Los Andes, Mérida, Venezuela) (GIDITIC, Universidad EAFIT, Medellín, Colombia) (Dpto. Automática, Universidad de Alcalá, Alcalá de Henares, Spain)

Applied Computing and Informatics

ISSN: 2634-1964

Article publication date: 8 February 2022


Abstract

Purpose

The Reflective Middleware for Acoustic Management (ReM-AM), based on the Middleware for Cloud Learning Environments (AmICL), aims to improve the interaction between users and agents in a smart environment (SE) using acoustic services, in order to handle the unpredictable situations caused by sounds and vibrations. The middleware allows observing, analyzing, modifying and interacting with every state of an SE from the acoustic perspective.

Design/methodology/approach

This work details an extension of ReM-AM using the ontology-driven architecture (ODA) paradigm for acoustic management. It defines the different domains of knowledge required for the management of sounds in SEs, which are modeled using ontologies.

Findings

This work proposes an acoustics and sound ontology, a service-oriented architecture (SOA) ontology, and a data analytics and autonomic computing ontology, which work together. Finally, the paper presents three case studies in the context of smart workplace (SWP), ambient-assisted living (AAL) and Smart Cities (SC).

Research limitations/implications

Future works will develop algorithms for the classification and analysis of sound events, to help with emotion recognition not only from speech but also from random and separate sound events. Other works will define the implementation requirements and the real context modeling requirements needed to develop a real prototype.

Practical implications

In the case studies, it is possible to observe the flexibility of the ReM-AM middleware based on the ODA paradigm: it is aware of different contexts, acquires information from each one, and uses this information to adapt itself to the environment and improve it through the autonomic cycles. To achieve this, the middleware integrates the classes and relations of its ontologies naturally into the autonomic cycles.

Originality/value

The main contribution of this work is the description of the ontologies required for future works on acoustic management in SEs. Previous works have used ontologies for sound event recognition, but these have not been expanded into knowledge sources for an SE middleware. Specifically, this paper presents the theoretical framework of this work, composed of the AmICL middleware, the ReM-AM middleware and the ODA paradigm.

Citation

Santiago, G. and Aguilar, J. (2022), "Ontological model for the acoustic management in a smart environment", Applied Computing and Informatics, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ACI-09-2021-0246

Publisher

Emerald Publishing Limited

Copyright © 2022, Gabriela Santiago and Jose Aguilar

License

Published in Applied Computing and Informatics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In artificial intelligence, the main process related to sound management is linked to voice recognition and voice commands. It is important to also consider the wide range of acoustic events that occur in a smart environment (SE). Thus, sound management in an SE is complex and requires new approaches.

As explained in Ref. [1], the Reflective Middleware for Acoustic Management (ReM-AM) extends AmICL [2, 3] to change the behavior of any SE from the acoustic perspective, implementing different components for sound management. In ReM-AM [4], there are three components: Collecting Audio Data (CAD), to create metadata and categorize sound information; Interaction of System–User–Agent (ISUA), to identify and analyze the information and determine the sources and interactions; and Decision-Making (DM), to adapt the SE in terms of acoustic requirements. These components help the system to deploy a set of autonomic cycles of data analysis tasks [5, 6], which work together to exploit the acoustics in the self-organization of the SE. The previous work [7] introduced a state of the art of acoustics and its relationship with SEs, while [4] described the ReM-AM middleware. The autonomic cycles supported by ReM-AM are explained in Ref. [8], and a first attempt to implement the ontology-driven architecture (ODA) for acoustic management is analyzed in Ref. [1].

This work explains how the ODA paradigm [9] is integrated with ReM-AM through different models, such as the computation-independent model (CIM) focused on acoustic management (acoustic and sound ontologies), and the platform-independent model (PIM) focused on how ReM-AM is deployed on the computational platform (service-oriented architecture [SOA], data analytics and autonomic computing ontologies).

In this regard, some works have been developed about ontologies related to sound and acoustics. In the work [10], audio event recognition is considered. Particularly, the authors describe a dataset of audio events obtained using a hierarchical ontology to stimulate the development of audio event recognizers. Also, in the work [11], a two-ontology-based neural network architecture for sound event classification is proposed, showing an improvement in classification tasks by using ontological information. In Ref. [12], the authors present an ontology-aware neural network for single-label and multilabel audio event classification. In Ref. [13], the author explains what sound ontology is in the context of analytic philosophy and the approaches that it encompasses. The paper's intention is to relate the depiction of sound and the auditory phenomena in the phenomenological tradition. In this case, analytic philosophy inquires into the nature of sounds, their location, auditory experience and the audible qualities based on an ontology approach.

Also, various works show the utilization of ontologies in SEs. For example, Ref. [14] describes an ontology, called SmartEnv, to represent SEs, which describes different aspects, including physical and conceptual characteristics of an SE. The authors use the ontology design pattern paradigm to define the ontology modularly. Ning et al. [15] propose an ontology for gathering sensor data with semantic meaning in smart homes. Particularly, this ontology allows the modeling of the context and the activities, using spatiotemporal and user profile information. Also, the work [16] discusses the use of ontologies to form an integrated platform for smart healthcare services, while the authors of [17] introduce ontologies to model dynamic context. In this work, however, the main focus is to combine the ontologies for the CIM and PIM layers, not to generate a new one as proposed in Ref. [18] by Stoilos et al., where independently developed ontologies are integrated, with experimental evaluations to analyze the coherence among the integrated ontologies. The existing ontologies support the acoustic management in an SE. Particularly, the mapping approach of Ref. [19] was used for the integration of the ontologies following the ODA paradigm; it establishes identity relationships between entities of the O1 ontology and the O2 ontology through their common properties (e.g. subclass_of). The result of the mapping is reflected in a binding ontology that contains the equivalent entities, through which the two ontologies are connected.
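To make the mapping concrete, the following is a minimal Python sketch using rdflib; the label-matching criterion is our illustrative assumption, standing in for the common-property criterion (e.g. subclass_of) of Ref. [19], and the code is not the authors' implementation.

```python
# Minimal sketch of the binding-ontology idea: entities of O1 and O2 that
# satisfy an identity criterion are linked with owl:equivalentClass axioms
# collected in a separate binding graph. Matching by shared rdfs:label is an
# illustrative assumption, standing in for the common-property criterion
# (e.g. subclass_of) of Ref. [19].
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def build_binding_ontology(g1: Graph, g2: Graph) -> Graph:
    binding = Graph()
    for c1 in g1.subjects(RDF.type, OWL.Class):
        for label in g1.objects(c1, RDFS.label):
            for c2 in g2.subjects(RDFS.label, label):
                # The binding ontology only records the equivalences; the two
                # source ontologies stay independent, connected through it.
                binding.add((c1, OWL.equivalentClass, c2))
    return binding
```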

On the other hand, to guide the ontology development/engineering process, methodologies have been proposed that support the creation or integration of ontologies. These methodologies allow reusing existing knowledge resources (such as ontologies, thesauri, lexicons, etc.). Examples of these methodologies are Methontology and NeOn. Particularly, NeOn has been created for building ontology networks by reusing resources, using alignments and considering the evolution of the ontologies [20]. This paper uses the ODA paradigm, which has an implicit methodology for the design/integration of ontologies.

The main contribution of this work is the definition of an ontological framework for acoustic management in SEs. In particular, this work defines:

  1. The different ontologies required for the implementation of an acoustic management system

  2. The process of integration of the ontologies based on the methodological framework implicit in the ODA paradigm

  3. The integration of the ontological framework with autonomic cycles of data analysis tasks, which allows the acoustic management of SEs.

Previous works studied the implementation of ontologies for sound event recognition, among other specific tasks, but none of them developed an ontological framework for acoustic management in an SE through a middleware.

Specifically, this paper introduces the theoretical framework of this work composed of the AmICL middleware, ReM-AM middleware and the ODA paradigm. Then, the definition of the AmICL-ReM-AM middleware based on the ODA paradigm is presented. The utilization of this ODA-based approach is shown in several case studies (SWP, AAL and SC) to analyze its behavior in these SEs. Finally, there is a comparison with other works, followed by the conclusions that summarize the contributions and future works.

2. Theoretical framework

To understand the relation between AmICL, ReM-AM and ODA, it is important to describe each one in this section.

2.1 AmICL and ReM-AM

AmICL proposes a middleware that uses digital resources to improve the learning process, combining educational services from the cloud with multiagent systems (MAS). The architecture of AmICL is described in detail in Refs. [2, 3], and it was used as a base for the development of the ReM-AM architecture described in Ref. [4], where the layers are the same, except for the Audio Management Layer (AML) that was added for ReM-AM.

The AML has three components. CAD characterizes the sound events by exploiting an auditory vocabulary to categorize them, creating metadata about them and defining the properties of each sound event. ISUA offers sound pattern recognition and smart analysis to identify the sources, using algorithms provided by the cloud, and to determine the interactions taking place in the SE. DM revises the information obtained and the options available to acoustically adapt and optimize the SE. Using this structure, ReM-AM has three main autonomic cycles, defined using the concept of “autonomic cycles of data analysis tasks”, which comprises three phases: observation, analysis and decision-making [5, 6]. These autonomic cycles are:

  1. The general acoustic management (GAM) follows the ABC paradigm [21], which consists of three processes: absorbing acoustic waves that create reverberation, blocking the dispersion of the waves to focus on the right direction and covering noises using noise-canceling processes.

  2. The intelligent sound analysis (ISA) aims at identifying the acoustic features of the context to discover the SE (an intelligent concert hall, a smart classroom, an ambient-assisted living [AAL], among others), and which possible tasks can be executed. It uses acoustic sensors in the SE, and the location of microphones will depend on the acoustic features of the space.

  3. The artificial sound perception (ASP) has the goal of offering an artificial perception of an SE. This autonomic cycle is based on the work [12], but inversely: that work presents a virtual acoustic reality, whereas ReM-AM, instead of generating an artificial acoustic environment, offers an artificial perception of an SE.

These autonomic cycles, concerning the ODA paradigm, will be described in the next sections.
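Before that, the three-phase structure that the cycles share can be summarized in a minimal sketch (our reading of the concept from [5, 6], not code from ReM-AM itself); GAM, ISA and ASP would be concrete subclasses of this skeleton:

```python
from abc import ABC, abstractmethod

class AutonomicCycle(ABC):
    """Three-phase autonomic cycle of data analysis tasks [5, 6]."""

    @abstractmethod
    def observe(self, environment: dict) -> dict:
        """Collect acoustic data (e.g. via the CAD component)."""

    @abstractmethod
    def analyze(self, observations: dict) -> dict:
        """Identify sources and interactions (e.g. via ISUA)."""

    @abstractmethod
    def decide(self, analysis: dict) -> list:
        """Choose adaptation actions on the SE (e.g. via DM)."""

    def run(self, environment: dict) -> list:
        # One pass of the cycle: raw acoustic data in, SE actions out.
        return self.decide(self.analyze(self.observe(environment)))
```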

2.2 ODA paradigm

In informatics, an ontology is the specification of a conceptualization in a machine-readable way [9], defining a common vocabulary for data exchange in a given domain. Its components are: classes or types, relations, instances, individuals, properties and axioms. An ODA is based on a three-layer approach [9]:

  1. Computation-independent model (CIM), with a focus on functional requirements, nonfunctional properties, business rules or goals, data processing and the system domain in general. This layer describes the system from a computation-independent perspective, considering the domain and contextual information.

  2. Platform-independent model (PIM), which takes the domain information of the specifications from CIM and adds details for its real computational implementation. It defines the system in terms of a computational abstraction or a technology-neutral virtual machine.

  3. Platform-specific model (PSM), which combines the PIM with platform-specific features, adding the details needed for development on a concrete computational platform.

In ReM-AM, the focus will be on outlining the CIM layer and the PIM layer.
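As a compact illustration of this layering (a sketch under our own naming assumptions, not a data model prescribed by the ODA paradigm), the layers and the ontology components listed above can be pictured as:

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    """Simplified ontology record holding only the components named above."""
    name: str
    classes: set = field(default_factory=set)
    relations: dict = field(default_factory=dict)  # relation -> (domain, range)
    axioms: list = field(default_factory=list)

@dataclass
class ODAModel:
    """ODA layering as used by ReM-AM: CIM holds the domain ontologies
    (sound, acoustics) and PIM the platform ontologies (SOA, data
    analytics, autonomic computing); the PSM layer is not modeled here."""
    cim: list = field(default_factory=list)
    pim: list = field(default_factory=list)
```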

3. ODA paradigm for ReM-AM

3.1 Ontologies for ReM-AM based on the ODA paradigm

This section describes the ontological framework of ReM-AM that will be built following the ODA paradigm, which has an implicit methodology for the design/integration of ontologies.

ReM-AM considers a sound ontology and an acoustic ontology for the CIM layer, as domain information that integrates the terminology and aspects required for acoustic management. The PIM layer contains the ontologies related to the computational implementation of the ReM-AM middleware: SOA capabilities, data analytics task description and autonomic computing.

The previous work [1] explains that the CIM layer in the ReM-AM middleware is composed of the sound ontology from Ref. [21] and the acoustic vocabulary developed in Ref. [22] (see Figure 1a). They have two description hierarchies: part-of (or has), based on the inclusion relationship; and is-a, based on the integration relationship. The sound ontology proposes a terminology for sound representation and for the integration of various sound-stream segregation systems. It has sound classes, definitions of individual sound attributes and their relationships, and uses just the part-of hierarchy.

In the acoustic vocabulary, there are four main classes: sensor, which allows controlling the acoustic data compiling; measurement, which defines the properties of the sensor measurements; time, which allows representing the intervals and instants when the measurements are made; and location, which allows representing the geographical positioning.
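For illustration, these four classes could be rendered as the following minimal Python sketch; the field choices are our assumptions, not the actual vocabulary of Ref. [22]:

```python
from dataclasses import dataclass

@dataclass
class Location:
    """Geographical positioning of a sensor or source."""
    latitude: float
    longitude: float

@dataclass
class Time:
    """Interval of a measurement; an instant has start == end."""
    start: float
    end: float

@dataclass
class Measurement:
    """Property measured by a sensor, e.g. sound pressure level in dB."""
    quantity: str
    value: float
    unit: str

@dataclass
class Sensor:
    """Controls the acoustic data compiling at a given location."""
    sensor_id: str
    location: Location

    def measure(self, m: Measurement, at: Time) -> tuple:
        return (self.sensor_id, m, at)
```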

The PIM layer (see Figure 1b) presents the ontologies that are linked to the middleware: SOA, data analytics and autonomic computing aspects. The SOA ontology described in Ref. [19] explains the web services, the languages and standards, and the life cycle of a service. The data analytics ontology explained in Refs. [23, 24] describes the data itself and the analysis methods, considering that each dataset has a set of parameters and criteria that determine the types, tasks and techniques for its further analysis.

Works [6, 25] define the autonomic computing ontology in terms of driven entities, which can be persons or objects, where the objects can be devices or applications. These entities can be managed by controllers with the capabilities to analyze, plan, act, monitor and control them.
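A minimal sketch of this taxonomy (our own rendering, with placeholder method bodies) might be:

```python
class DrivenEntity:
    """Driven entities are persons or objects; objects can be devices
    or applications [6, 25]."""
    KINDS = {"person", "device", "application"}

    def __init__(self, name: str, kind: str):
        assert kind in self.KINDS
        self.name, self.kind = name, kind
        self.state: dict = {}

class Controller:
    """Manages driven entities with monitor/analyze/plan/act capabilities."""
    def __init__(self, entities: list):
        self.entities = entities

    def monitor(self) -> dict:
        return {e.name: dict(e.state) for e in self.entities}

    def analyze(self, snapshot: dict) -> dict:
        # Placeholder analysis: flag entities with a nonempty state.
        return {name: bool(state) for name, state in snapshot.items()}

    def plan(self, diagnosis: dict) -> list:
        return [name for name, flagged in diagnosis.items() if flagged]

    def act(self, plan: list) -> None:
        for name in plan:
            print(f"acting on {name}")
```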

In general, all the elements (classes, properties) of the aforementioned ontologies are used by our ontological approach in its inference processes.

3.2 ODA paradigm in the autonomic cycles of ReM-AM

The autonomic cycles of ReM-AM will use the ODA paradigm as follows (a schematic sketch follows the list):

  1. GAM: During the observation phase (see Step 1 in Figure 2), it will use the sensor, measurement and location classes of the acoustic ontology (CIM layer) to obtain information about the conditions that can modify the acoustic features of the SE, such as room shapes, wall materials, objects, temperature, air density, etc. Also, the classes of the sound ontology (CIM layer) will be used to classify sounds as speech, machinery, musical or natural sounds, determining the diverse sources and indicating which sounds should be absorbed, blocked or covered in the next phase (see Step 2 of Figure 2). Finally, during the decision-making phase, the controllers and driven entities from the autonomic computing level (PIM layer) will be used to execute the specific actions that modify the acoustical situation to remove nondesirable sounds or noises (see Step 3 in Figure 2).

  2. ISA: During the observation phase, the measurement class of the acoustic ontology (CIM layer) and the different classes of the sound ontology (CIM layer) will be used to identify sound information (see Steps 1–2 in Figure 3), which is crucial to get a more accurate perception of the SE and the tasks that could take place. For the analysis phase, the monitoring class from the data analytics ontology (PIM layer) will be used to detect and discriminate users, setting them apart from the sources, in order to determine the actual use of the SE and the improvements that can be made in terms of sound features (see Step 3 in Figure 3). Finally, the decision-making phase will use the SOA capabilities ontology (PIM layer) to recover the tasks that the analysis identified, such as increasing the absorptive materials by adding panels, or the reflective surfaces for better dispersion (see Step 4 in Figure 3).

  3. ASP: During the observation phase, the acoustic ontology (CIM layer) will be used to locate sources, helping to hold the focus on the specific sound event or frequency that should be treated (see Step 1 in Figure 4), and the sound ontology (CIM layer) will be used to identify features of that event or frequency (see Step 2 in Figure 4). For the analysis phase, the sound ontology (CIM layer) will also be used, but in this case to determine acoustical parameters in order to amplify them (or decrease them, if necessary) and make them hearable if they are over or under the human hearing thresholds (see Step 2 in Figure 4). The decision-making will use the SOA capabilities ontology (PIM layer) to determine the services according to the target, which means moving loudspeakers, rotating them or changing their settings to create an immersive sound environment for the user (see Step 3 in Figure 4). Some of these services might need to use the data analytics ontology (PIM layer) for specialized processing of the sound (see Step 4 in Figure 4).
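The phase-to-ontology mapping just described can be condensed into one schematic structure (a summary of ours; the class names are shorthand for the ontology elements cited above):

```python
# Each cycle phase pulls classes from the CIM- and PIM-layer ontologies.
# Entries: (layer, ontology, classes); "*" stands for all classes.
PHASE_ONTOLOGIES = {
    "GAM": {
        "observation": [("CIM", "acoustic", ["Sensor", "Measurement", "Location"])],
        "analysis":    [("CIM", "sound", ["Speech", "Machinery", "Musical", "Natural"])],
        "decision":    [("PIM", "autonomic", ["Controller", "DrivenEntity"])],
    },
    "ISA": {
        "observation": [("CIM", "acoustic", ["Measurement"]), ("CIM", "sound", ["*"])],
        "analysis":    [("PIM", "data_analytics", ["Monitoring"])],
        "decision":    [("PIM", "soa", ["Service"])],
    },
    "ASP": {
        "observation": [("CIM", "acoustic", ["Location"]), ("CIM", "sound", ["*"])],
        "analysis":    [("CIM", "sound", ["AcousticalParameter"])],
        "decision":    [("PIM", "soa", ["Service"]),
                        ("PIM", "data_analytics", ["Technique"])],
    },
}
```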

4. Application scenarios

To test the ontologies with the autonomic cycles, this paper describes three case studies:

4.1 Smart workplace (SWP)

As explained in Ref. [26], an SWP implies the provision of a workplace infrastructure that empowers employees through self-regulation, collaboration and communication, promotes a strong environmental ethic, and sustains organizational agility by adopting and implementing new technology platforms.

In this SE, the autonomic cycle GAM can help to improve the experience of workers in their offices, especially in open-plan offices. According to Ref. [27], irrelevant speech in open-plan offices has been reported as the main distraction, causing performance decrements for workers.

In this case, it is important to identify what is considered as irrelevant speech and what should be enhanced. GAM can obtain the acoustic features from the entire workplace using the sensor, measurement and location classes from the acoustic ontology in the CIM layer (see Step 1 in Figure 5), in order to carry out the sound classification with the sound ontology to differentiate voices from other sound events (see Step 2 in Figure 5).

At this point, the system needs to consider every sound event as a unit: a door closing, a book falling, steps, laughs, coughs and every other isolated sound that can be generated. For example, in a space with 16 workers divided into four groups of four, the system could help with the absorption of the reflections spreading from the other groups; the separations between the tables of each group should help with the blockage of the acoustic waves, avoiding transmission to the other workspaces; and the system can also determine which waves should be covered.

It will use the controllers and driven entities classes from the autonomic computing taxonomy of the PIM layer for the decision-making about absorption, blocking and covering (see Step 3 in Figure 5). As a result, the system will absorb the frequencies generating noise by emitting a phase-inverted cancellation signal through roof-installed speakers, it will identify the acoustic waves that are blocked by screen barriers, and it will cover individual sound events by masking them, when possible, using the same speakers as for the noise canceling.
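As a toy illustration of the decision logic of this step, the following sketch applies the ABC rules to individual sound events; the event fields and rules are our assumptions, not the middleware's implementation:

```python
def gam_action(event: dict) -> str:
    """ABC decision for the open-plan office example: absorb reverberant
    reflections, block speech crossing group barriers, cover the rest."""
    if event["type"] == "reflection":
        return "absorb"   # phase-inverted signal via roof-installed speakers
    if event["type"] == "speech" and event["group"] != event["listener_group"]:
        return "block"    # screen barriers between the groups of four
    return "cover"        # mask isolated events (door, steps, cough) when possible

# e.g. gam_action({"type": "speech", "group": 2, "listener_group": 1}) == "block"
```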

Competency questions are user-oriented questions to evaluate an ontology [28]; in other words, they are questions that users would want to have answered by querying the ontology. Through the application of competency questions, it is possible to verify the quality of the ontologies used for this case study:

Q1. Which sound elements should be enhanced?

R1. In this case study, our ontological approach can infer “Speech”.

Q2. Which elements will be used for the decision-making?

R2. In this case study, our ontological approach can infer “Controllers”.
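To make the querying concrete, a minimal sketch with rdflib could answer Q1 as follows; the IRIs and the shouldBeEnhanced property are hypothetical, introduced only for illustration:

```python
from rdflib import Graph

# Hypothetical toy fragment of the CIM-layer sound ontology.
g = Graph().parse(data="""
@prefix : <http://example.org/rem-am#> .
:Speech    a :SoundClass ; :shouldBeEnhanced true .
:Machinery a :SoundClass ; :shouldBeEnhanced false .
""", format="turtle")

# Q1: Which sound elements should be enhanced?
rows = g.query("""
    PREFIX : <http://example.org/rem-am#>
    SELECT ?c WHERE { ?c :shouldBeEnhanced true }
""")
for row in rows:
    print(row.c)  # -> http://example.org/rem-am#Speech
```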

4.2 Ambient-assisted living (AAL)

In the context of AAL [29], it is important that the system can provide information about what is happening in the SE. The ISA autonomic cycle will be able to collaborate in the required tasks by providing the acoustic information from the SE. For example, in a care home with disabled and elderly people, a patient in a wheelchair has breathing problems and is often coughing, but at a particular moment the patient is having an asthma attack. The ISA will use the measurement class of the acoustic ontology to get the constants in the sound field, such as coughs and the sound of the wheelchair (see Step 1 in Figure 6), and the sound ontology from the CIM layer will be used to obtain sound information that identifies speech, in case the patient is calling for help (see Step 2 in Figure 6).

Besides the individual sound events taking place, it is also important to consider that, if speech is identified, emotion recognition from sound should be taken into account: anger, disgust, fear, happiness, sadness or surprise [30].

From the PIM layer, the system will use the data analytics ontology to monitor the sound events that alter the continuum in the SE, taking into account the emotional state of the patient, which in this case is fear (see Step 3 in Figure 6). With all this information, the system will use the web services from the SOA capabilities ontology for the decision-making, which should send a warning to a person in charge or alert doctors or nurses (see Step 4 in Figure 6).
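As a toy illustration of this ISA flow (the event names, emotion label and notify_caregiver service are our assumptions, not part of the middleware):

```python
from typing import Optional

def isa_alert(events: list, emotion: str) -> Optional[dict]:
    """Toy ISA decision for the care-home example: routine sounds (coughs,
    wheelchair) form the baseline; a call for help combined with a fearful
    emotional state triggers an urgent warning through a hypothetical
    notify_caregiver web service."""
    baseline = {"cough", "wheelchair"}
    anomalies = [e for e in events if e not in baseline]
    if "speech_call_for_help" in anomalies and emotion == "fear":
        return {"service": "notify_caregiver", "priority": "urgent"}
    if anomalies:
        return {"service": "notify_caregiver", "priority": "routine"}
    return None  # only baseline sounds: no action needed

# e.g. isa_alert(["cough", "speech_call_for_help"], "fear")["priority"] == "urgent"
```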

The competency questions for this case study are:

Q1. Which element should be identified at first?

R1. In this case study, our ontological approach can infer “Speech”.

Q2. Which services will be used for the decision-making?

R2. In this case study, our ontological approach can infer “Web services”.

4.3 Smart cities (SC)

According to Ref. [31], in smart cities it is imperative to exploit the wealth of information and knowledge generated in them by integrating Information and Communication Technologies (ICT). There are different areas of application of smart cities, such as government, education, health, buildings and mobility, among others [3, 32]. In this case, the focus will be on the environment and the acoustic impact on it, in terms of how acoustic signals could help governments take action against natural disasters such as earthquakes. Considering that vibrations before an earthquake can be detected at short notice by special devices, the ASP autonomic cycle will use the location class of the acoustic ontology from the CIM layer to locate vibration sources and their distance (see Step 1 in Figure 7), as well as the sound ontology to identify acoustic features and parameters in the vibrations, allowing the perception of sounds that are outside the human hearing threshold (see Step 2 in Figure 7).

The sound analysis tasks previously indicated, which this cycle carries out, would allow determining the characteristics of the tremor and, in this way, eventually foreseeing preventive actions, as well as preparing future actions in the face of the possible natural disaster that is approaching (an earthquake). These are very specific, specialized sound-processing tasks that should be invoked by the cycle. Thus, the ontologies are used to infer these needs and, specifically, the analysis tasks and techniques required according to the context.

Thus, for these specialized tasks, the web services of the SOA capabilities ontology from the PIM layer (see Step 3 in Figure 7) and the data analytics ontology from the PIM layer are required to determine the techniques needed to implement actions against events that could cause damage, such as a warning recommending the evacuation of specific areas or indicating the nearest safe place (see Step 4 in Figure 7).
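As a toy illustration of this ASP flow (field names, the audible band bounds and the 60 dB threshold are illustrative assumptions, not values from the paper):

```python
AUDIBLE_HZ = (20.0, 20_000.0)  # approximate human hearing band

def asp_actions(vibration: dict) -> list:
    """Toy ASP pipeline for the earthquake example: infrasonic vibrations
    are shifted/amplified into the audible band for monitoring, and strong
    low-frequency sources additionally trigger a warning service."""
    actions = []
    lo, hi = AUDIBLE_HZ
    f, a = vibration["frequency_hz"], vibration["amplitude_db"]
    if not lo <= f <= hi:
        actions.append("shift_and_amplify_into_audible_band")
    if f < lo and a > 60:
        actions.append("invoke_warning_service('evacuate to nearest safe place')")
    return actions

# e.g. asp_actions({"frequency_hz": 5.0, "amplitude_db": 70}) returns both actions
```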

For this case study, the competency questions are:

Q1. Which source feature should be identified at first?

R1. In this case study, our ontological approach can infer “Location”.

Q2. How can a final recommendation be offered?

R2. In this case study, our ontological approach can infer “Techniques/Data Analytics”.

5. Comparisons with previous works

In order to compare the ontologies used in previous works [10–12] with this article, a set of criteria is used: (1) they can use specialized tasks for sound analysis, (2) they allow the dynamic integration of audio services, (3) they consider the acoustic information from the context, (4) they can be used in diverse real-life contexts and (5) they can be used in SEs. Table 1 shows the comparison of the works.

All these works use specialized tasks to perform the sound analysis required in each project; it is what they have in common. The works [10, 11] show an absence of dynamic integration of audio services in their proposals, but work [12] integrates audio services in its experimental setup. Gemmeke et al. [10] is the only work that takes into account the acoustic information from the context in which it is used, while works [11, 12] focus on the audio itself. Works [11, 12] describe experimentation considering real-life contexts, while work [10] introduces this idea as future work. None of these works propose any use in SEs.

ReM-AM meets all these criteria: it allows the integration of specialized tasks for sound analysis (PIM layer), proposes a dynamic integration of audio services (CIM and PIM layers), uses as much acoustic information from the environment as possible (see Section 4), based on the context-awareness principles of the AmICL middleware [3, 33], and prioritizes its application in different real-life contexts and SEs (see Section 4).

6. Conclusions

This work carries out an extension of the ReM-AM middleware based on the ODA paradigm for acoustic management in SEs. The acoustic management and the ReM-AM middleware deployment are integrated by ontologies using the CIM and PIM layers of the ODA paradigm. The CIM layer contains the domain information from a sound ontology and an acoustic ontology, and the PIM layer contains ontologies for the computational implementation of the middleware: SOA capabilities, data analytics and autonomic computing.

In the case studies, it is possible to observe the flexibility that the ReM-AM middleware based on the ODA paradigm has by being aware of different contexts and acquiring information of each one of them, using this information to adapt itself to the environment and improve it using the autonomic cycles. To achieve this, the middleware integrates the classes and relations in its ontologies in the autonomic cycles.

Future works will develop algorithms for classification and analysis of sound events to help with emotion recognition not only from speech but also from random and separate sound events. Also, other works will define the implementation requirements and the real context modeling requirements to develop a real prototype. Some aspects to consider in this future implementation are the sensing mechanisms and their relationship with the elements of the context described in the ontologies of the CIM layer; and the software development mechanisms and deployment platforms and their relationships with the elements described in the ontologies of the PIM layer. Thus, to make acoustic management feasible, there must be a coherence between the SE and the elements described in the ontological framework, but the ontologies are flexible enough to instantiate the elements present in the SE.

Figures

Figure 1. Ontological framework of the ReM-AM

Figure 2. Ontologies for the GAM autonomic cycle

Figure 3. Ontologies for the ISA autonomic cycle

Figure 4. Ontologies for the ASP autonomic cycle

Figure 5. Interaction diagram for GAM in SWP

Figure 6. Interaction diagram for ISA in AAL

Figure 7. Interaction diagram for ASP in SC

Table 1. Comparison to other works

Work      (1)  (2)  (3)  (4)  (5)
[10]       X         X
[11]       X              X
[12]       X    X         X
ReM-AM     X    X    X    X    X

References

1.Santiago G, Aguilar J. Ontology driven architecture for acoustic management. In: Proc. Internoise 48. Madrid; 2019.

2.Sánchez M, Aguilar J, Cordero J, Valdiviezo-Díaz P, Barba-Guamán L, Chamba-Eras L. Cloud computing in smart educational environments: application in learning analytics as service. In: Rocha Á, Correia A, Adeli H, Reis L, Mendonça Teixeira M, (Eds). New advances in information systems and Technologies. Advances in intelligent systems and computing. 2016; 444: 993-1002.

3.Jovanović D, Milovanov S, Ruskovski I, Govedarica M, Sladić D, Radulović A, Pajić V. Building virtual 3D city model for Smart Cities applications: a case study on campus area of the University of Novi Sad. ISPRS Int J Geo-Information. 2020; 9(8): 476.

4.Santiago G, Aguilar J, Chávez D. ReM-AM: reflective middleware for acoustic management in intelligent environments. In: Proc. XLIII Conferencia Latinoamericana de Informática (CLEI), 2017.

5.Vizcarrondo J, Aguilar J, Exposito E, Subias A. ARMISCOM: autonomic reflective middleware for management service composition. In: Proc. Global information infrastructure and networking symposium; 2012.

6.Vizcarrondo J, Aguilar J, Exposito E, Subias A. MAPE-K as a service-oriented architecture. IEEE Latin Am Trans. 2017; 15(6): 1163-75.

7.Santiago G, Aguilar J. Acoustic science in intelligent environments. Latin Am J Comput. 2017; 4(1): 27-36.

8.Santiago G, Aguilar J. Integration of ReM-AM in smart environments. WSEAS Trans Comput. 2019; 18: 97-100.

9.Aguilar J, Portilla O. Framework Basado en ODA para la Descripción y Composición de Servicios Web Semánticos (FODAS-WS). Latin American Journal of Computing. 2015; 2(2): 15-24.

10.Gemmeke JF, Ellis DP, Freedman D, Jansen A, Lawrence W, Moore RC, Ritter M. Audio set: an ontology and human-labeled dataset for audio events. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2017: 776-80.

11.Jiménez A, Elizalde B, Raj B. Sound event classification using ontology-based neural networks. In: Proceedings of the Annual Conference on Neural Information Processing Systems; 2018.

12.Sun Y, Ghaffarzadegan S. An ontology-aware framework for audio event classification. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2020: 321-5.

13.Méndez-Martínez J. Sound ontology and the Brentano-Husserl analysis of the consciousness of time. Horizon; 2020: 184-215.

14.Alirezaie M, Hammar K, Blomqvist E. SmartEnv as a network of ontology patterns. Semantic Web. 2018; 9(6): 903-18.

15.Ning H, Shi F, Zhu T, Li Q, Chen L. A novel ontology consistent with acknowledged standards in smart homes. Comput Network. 2019; 148: 101-7.

16.Shahzad S, Ahmed D, Naqvi M, Mushtaq M, Iqbal M, Munir F. Ontology driven smart health service integration. Comp Meth Programs Biomed. 2021; 207. Available from: https://www.sciencedirect.com/science/article/abs/pii/S0169260721002200.

17.Iqbal MW, Ch NA, Shahzad SK, Naqvi MR, Khan BA, Ali Z. User context ontology for adaptive mobile-phone interfaces. IEEE Access. 2021; 9: 96751-62.

18.Stoilos G, Geleta D, Shamdasani J, Khodadadi M. A novel approach and practical algorithms for ontology integration. In: International Semantic Web Conference, Cham. Springer; 2018: 458-76.

19.Aguilar J, Jerez M, Rodríguez T. CAMeOnto: context awareness meta ontology modeling. Applied Computing and Informatics. 2018; 14(2): 202-13.

20.Gomez-Perez A, Suárez-Figueroa M. NeOn methodology for building ontology networks: a scenario-based methodology. In: International Conference on Software, Services & Semantic technologies; 2009: 160-7.

21.Reinten J, Braat-Eggen P, Hornikx M, Kort H, Kohlrausch A. The indoor sound environment and human task performance: a literature review on the role of room acoustics. Build Environ. 2017; 123: 315-32.

22.Espinoza-Arias P, Poveda-Villalón M, Corcho O. Using LOT methodology to develop a noise pollution ontology: a Spanish use case. J Ambient Intelligence Humanized Comput. 2020; 11: 4557-68.

23.Vasyl L, Vysotska V, Veres O, Brodyak O, Oryshchyn O. Big Data analytics ontology. Technol Audit Prod Reserves. 2017; 1: 16-27.

24.Bandara M, Behnaz A, Rabhi F, Demirors O. From requirements to data analytics process: an ontology-based approach. In: International Conference on Business Process Management; 2018: 543-52.

25.Sánchez M, Exposito E, Aguilar J. Industry 4.0: survey from a system integration perspective. Int J Computer Integrated Manuf. 2020; 33(11).

26.Brougham D, Haar J. Smart technology, artificial intelligence, robotics, and algorithms (STARA): employees' perceptions of our future workplace. J Management Organ. 2018; 24(2): 239-57.

27.Di Blasio S, Shtrepi L, Puglisi G, Astolfi A. A cross-sectional survey on the impact of irrelevant speech noise on annoyance, mental health and well-being, performance and occupants' behavior in shared and open-plan offices. Int J Environ Res Public Health. 2019; 16(2): 280.

28.Bandeira J, Bittencourt II, Espinheira P, Isotani S. FOCA: a methodology for ontology evaluation. Cornell University, Tech Rep. 2016. [Online]. Available from: http://arxiv.org/abs/1612.03353.

29.Victor Hugo C, Hareesha K. IoT in healthcare and ambient assisted living. In: Marques G, Bhoi AK (Eds). Springer; 2021.

30.Zhang J, Yin Z, Chen P, Nichele S. Emotion recognition using multi-modal data and machine learning techniques: a tutorial and review. Inf Fusion. 2020; 59: 103-26.

31.Aguilar J, Sanchez M, Jerez M, Mendonca M. An extension of the MiSCi middleware for smart cities based on fog computing. I. Management Association. In: Smart cities and smart spaces: concepts, methodologies, tools, and applications. IGI Global; 2019. p. 778-98.

32.Saleem S, Zeebaree S, Zeebaree D, Abdulazeez A. Building smart cities applications based on iot technologies: a review. Technology Rep Kansai Univ. 2020; 62(3): 1083-92.

33.Aguilar J, Jerez M, Exposito E, Villemur T. CARMiCLOC: context awareness middleware in cloud computing. In: Latin American Computing Conference (CLEI); 2015: 532-41.

Corresponding author

Jose Aguilar can be contacted at: aguilar@ula.ve
