Overview
In this companion page, we provide the software artifacts and technical reports of our paper "Dealing with beliefs in domain models" [1].
Examples
To evaluate the expressiveness and usability of our proposal, as well as to illustrate it, we have developed several case studies:
1) Controlling the AC system of a room
The rooms in a smart house are equipped with temperature and humidity sensors, which a home-automation controller uses to turn the AC systems in the rooms on or off. As with any other physical measurement, the sensor readings are subject to measurement uncertainty, i.e., they have some inaccuracy, and therefore the decisions of the AC controller are also imprecise. In addition, the occupants of the house hold beliefs about the readings, too, because they may not trust the sensors. For example, one of the occupants knows that the kitchen temperature sensor is near the oven, so he is always unsure about the decisions made by the AC controller. This case study shows how different types of uncertainty are represented in the domain model of the smart house system, how agents can express their beliefs about the functioning of the AC controller, and how these beliefs combine with the measurement uncertainties caused by the sensors' inaccuracies. Furthermore, we show how to merge the individual opinions of the different occupants of the house about the air conditioning system of a room, in order to reach agreements about turning it on or off.
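To give a concrete flavor of how such individual opinions can be merged, the sketch below applies the cumulative fusion operator for binomial opinions from Subjective Logic [3]. The opinion values are made up for illustration, and our actual proposal expresses these operations at the model level; the code only illustrates the underlying arithmetic.

```python
# Minimal sketch (illustrative values): merging two occupants' binomial
# opinions about the proposition "turn the AC on" with Jøsang's
# cumulative fusion operator. Opinions are (belief, disbelief, uncertainty)
# triples that sum to 1; base rates are assumed equal and omitted.

def cumulative_fusion(b1, d1, u1, b2, d2, u2):
    """Cumulative fusion of two binomial opinions (requires u1 + u2 > 0)."""
    k = u1 + u2 - u1 * u2                 # normalization factor
    b = (b1 * u2 + b2 * u1) / k
    d = (d1 * u2 + d2 * u1) / k
    u = (u1 * u2) / k
    return b, d, u

# Occupant A mildly favors turning the AC on; occupant B leans against it.
b, d, u = cumulative_fusion(0.6, 0.2, 0.2,   # A's opinion
                            0.2, 0.5, 0.3)   # B's opinion
print(f"fused: b={b:.3f}, d={d:.3f}, u={u:.3f}")  # fused: b=0.500, d=0.364, u=0.136
```

Note that the fused opinion is less uncertain than either individual opinion, since the two occupants' evidence accumulates.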
2) Deciding whether to go out jogging based on the weather forecast
Suppose that three agents, Ada, Bob, and Cam, live in Málaga and want to go jogging. They use weather services to learn the outside temperature and the expected chances of rain for the next day. Let us assume that the weather forecast predicts a 60% probability of rain in Málaga the following day. However, Bob does not trust this information, because he knows that the predictions are made from data taken at the airport, whose conditions differ from those in the city center. So he also adds some subjective uncertainty to the weather forecast's predictions. The other two agents may want to do the same. Of course, the subjective uncertainty that different agents assign to the same information may vary, depending on their personal history, experiences, and beliefs (e.g., their individual level of trust in a data provider or its sources). This example illustrates the kinds of systems whose users (the agents) receive information from external data sources that are not completely reliable or may contain inaccuracies, and that therefore have an associated uncertainty, normally expressed as a probability representing the degree of confidence in the value provided by the source. In addition to this (objective) degree of confidence, users of these systems may also attach some subjective uncertainty to that of the source.
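As a rough illustration of how Bob's subjective uncertainty can be combined with the forecast's probability, the sketch below uses Subjective Logic's trust discounting operator [3]: the forecast's 60% chance of rain is modeled as a dogmatic opinion, and Bob's trust opinion in the forecaster (whose values are invented for illustration) discounts it.

```python
# Minimal sketch: Bob discounts the forecast's 60% chance of rain by his
# trust in the forecaster. The forecast is a dogmatic opinion (no
# uncertainty); Bob's trust opinion is invented for illustration.

def projected_probability(b, d, u, a):
    """Projected probability P = b + a*u of a binomial opinion (b, d, u, a)."""
    return b + a * u

def discount(trust, opinion):
    """Trust discounting: scale belief/disbelief by the projected
    probability of the trust opinion; the remainder becomes uncertainty."""
    p = projected_probability(*trust)
    b, d, u, a = opinion
    return (p * b, p * d, 1 - p * (b + d), a)

forecast = (0.6, 0.4, 0.0, 0.5)    # "rain tomorrow" with P = 60%
bob_trust = (0.7, 0.1, 0.2, 0.5)   # hypothetical trust in the forecaster
bob_view = discount(bob_trust, forecast)
print(tuple(round(x, 3) for x in bob_view))        # (0.48, 0.32, 0.2, 0.5)
print(round(projected_probability(*bob_view), 3))  # 0.58
```

For Bob, the forecast's 60% shrinks to a projected 58% chance of rain, with part of the probability mass turned into uncertainty that reflects his distrust of the source.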
3) A Digital Humanities system
This case study is taken from the DICTOMAGRED project [6] in the digital humanities domain, which analyzes historical sources (including, e.g., oral testimonies and legal documents), most of them in Arabic. Such sources contain geographical references describing routes through different areas of the Maghreb, their place names (toponyms), and other related historical events. The main goal of the project is "to provide a software tool for humanities specialists to retrieve information about the location of toponyms in North Africa as they appear in historical sources of medieval and modern times". The project has been used extensively in other proposals [7, 8, 9] that employ domain models to represent the inherent vagueness of the kind of information it manages. For example, some of the sources are not reliable, provide imprecise or incomplete descriptions and details, or may have been altered over the years, yielding different versions of the same facts (as happens with oral sources). Our goal with this case is to illustrate how our proposal enables the specification of the vagueness associated with DICTOMAGRED models, to show that we can be even more expressive than these previous proposals by allowing different users to express different opinions, and to analyze how these disparate opinions can be reconciled.
4) Dealing with beliefs associated with the inherent uncertainty of Machine Learning applications
This case shows how an AI-empowered recommender system is a source of objective uncertainty (i.e., the confidence that a machine learning algorithm provides together with each prediction). We show how beliefs can be expressed about the predictions of these AI components. In particular, we show how a group of friends who are going to travel together agree on the hotel where they will stay, considering their individual trust in the travel agencies and hotel booking services offering the rooms, which use machine learning algorithms to provide their suggestions.
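A toy numeric sketch of this scenario: the recommender's confidence can be mapped onto the uncertainty of a binomial opinion (this particular mapping, as well as the friends' names and trust values, are assumptions made for illustration, not prescribed by our proposal); each friend then discounts the prediction by their trust in the booking service, and the discounted opinions are merged with Subjective Logic's averaging fusion operator [3].

```python
# Toy sketch (assumed mapping): a hotel recommender predicts "good choice"
# with confidence 0.9, which we map to the binomial opinion
# (b=0.9, d=0.0, u=0.1, a=0.5). Each friend discounts it by a scalar trust
# in the booking service, and the group merges the discounted opinions.

def discount_by_trust(p_trust, opinion):
    """Scale belief/disbelief by trust probability; remainder is uncertainty."""
    b, d, u, a = opinion
    return (p_trust * b, p_trust * d, 1 - p_trust * (b + d), a)

def averaging_fusion(o1, o2):
    """Averaging fusion of two binomial opinions (requires u1 + u2 > 0)."""
    b1, d1, u1, a = o1
    b2, d2, u2, _ = o2
    k = u1 + u2
    return ((b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k,
            2 * u1 * u2 / k, a)

prediction = (0.9, 0.0, 0.1, 0.5)        # ML confidence 0.9 as an opinion
ann = discount_by_trust(0.9, prediction)  # Ann trusts the service a lot
ben = discount_by_trust(0.6, prediction)  # Ben trusts it less
group = averaging_fusion(ann, ben)
print(tuple(round(x, 3) for x in group))  # ≈ (0.731, 0.0, 0.269, 0.5)
```

Ben's lower trust pulls the group opinion toward higher uncertainty, so the merged recommendation is noticeably weaker than the raw 0.9 confidence reported by the algorithm.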
5) Videogame reviews on Steam
In the context of videogame digital stores, we have developed an application to track the opinions expressed in user reviews on Steam (a videogame digital distribution service). On Steam, users describe their personal opinions and experiences playing a game. This offers an interesting dataset from the point of view of opinion mining and sentiment analysis, and it has been analyzed in the software engineering literature (e.g., [10]). User reviews may show conflicting opinions about a particular game. We discuss how our uncertainty modeling approach can be used to study the opinions of several users about a particular videogame.
6) Test Case Generation in Software Modeling
In this example, we present a scenario where Classifying Terms (CTs) are used to automatically generate test cases. In our approach, the stakeholders add beliefs to the generated test cases (i.e., model instances) and reason about them before the test cases are accepted and used for testing purposes, or discarded.
Further Project Resources
You can find the MagicDraw plugin that we have developed to support our proposal, together with its user guide, source code, developer documentation, and all our examples, in our Git repository.
References
[1] Loli Burgueño, Paula Muñoz, Robert Clarisó, Jordi Cabot, Sébastien Gérard, and Antonio Vallecillo. "Dealing with beliefs in domain models." Submitted, 2021.
[2] Paula Muñoz, Loli Burgueño, Victor Ortiz, and Antonio Vallecillo. "Extending OCL with Subjective Logic." Journal of Object Technology 19(3), Oct 2020. doi:10.5381/jot.2020.19.3.a1
[3] Audun Jøsang. "Subjective Logic – A Formalism for Reasoning Under Uncertainty." Springer, 2016. doi:10.1007/978-3-319-42337-1
[4] Manuel F. Bertoa, Loli Burgueño, Nathalie Moreno, and Antonio Vallecillo. "Incorporating measurement uncertainty into OCL/UML primitive datatypes." Software and Systems Modeling 19(5):1163–1189, 2020. doi:10.1007/s10270-019-00741-0
[5] Javier Troya, Nathalie Moreno, Manuel F. Bertoa, and Antonio Vallecillo. "Representing Uncertainty in Software Models – A Survey." Software and Systems Modeling, 2021. doi:10.1007/s10270-020-00842-1
[6] Miguel Angel Manzano, Helena de Felipe-Rodríguez, and Laura Gago-Gómez. "DICTOMAGRED: Diccionario de Toponimia Magrebí." 2018. https://dictomagred.usal.es/
[7] Patricia Martín-Rodilla and Cesar Gonzalez-Perez. "Representing Imprecise and Uncertain Knowledge in Digital Humanities: A Theoretical Framework and ConML Implementation with a Real Case Study." In Proc. of TEEM'18, ACM, 863–871, 2018. doi:10.1145/3284179.3284318
[8] Patricia Martín-Rodilla and Cesar Gonzalez-Perez. "Conceptualization and Non-Relational Implementation of Ontological and Epistemic Vagueness of Information in Digital Humanities." Informatics 6(2):20, 2019. doi:10.3390/informatics6020020
[9] Patricia Martín-Rodilla, Martin Pereira-Fariña, and Cesar González-Perez. "Qualifying and Quantifying Uncertainty in Digital Humanities: A Fuzzy-Logic Approach." In Proc. of TEEM'19, ACM, 788–794, 2019. doi:10.1145/3362789.3362833
[10] Dayi Lin, Cor-Paul Bezemer, Ying Zou, and Ahmed E. Hassan. "An empirical study of game reviews on the Steam platform." Empirical Software Engineering 24(1):170–207, 2019.