Table of Contents
Iranian Journal of Information Processing and Management (پژوهشنامه پردازش و مدیریت اطلاعات)
Volume 40, Issue 3 (Serial No. 122, Winter 2025)
- English Special Issue 3
- Publication date: 1403/12/01 (Solar Hijri)
- Number of articles: 11
Pages 1-20
Data is considered the most important element in open banking processes and services, so it is necessary to attend to the various aspects of its quality in order to provide the appropriate services customers expect. Accordingly, this research investigated the dimensions that represent different aspects of data quality in the field of open banking. The research was conducted in two main steps: the Delphi method and the pairwise comparisons method. In the first step, the various dimensions of data quality in open banking were extracted using the Delphi method; in the next step, the pairwise comparisons method was used to assess the importance of each dimension relative to the others and to determine the most important ones. Based on the results of these two methods, the importance of eleven dimensions of data quality in this field was determined. The highest overall weighted averages belonged, in descending order, to accuracy, accessibility, relevancy, timeliness, consistency, security, interpretability, reputation, believability, ease of understanding, and value-added. Banks and fintechs providing open banking services can consider these dimensions when assessing the quality of their data in order to provide better services.
Keywords: Open Banking, Data Quality, Delphi Method, Pairwise Comparisons
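The abstract above reports weighted averages from pairwise comparisons but does not spell out the aggregation step. As a minimal sketch, assuming the common geometric-mean (AHP-style) derivation of weights from a pairwise comparison matrix, the snippet below ranks four dimensions; the matrix values and the choice of dimensions are illustrative, not the paper's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over four data quality
# dimensions (values are illustrative, not taken from the paper).
dimensions = ["accuracy", "accessibility", "relevancy", "timeliness"]
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Geometric-mean method: each dimension's weight is the geometric
# mean of its row, normalized so the weights sum to 1.
row_gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = row_gm / row_gm.sum()

for name, w in sorted(zip(dimensions, weights), key=lambda p: -p[1]):
    print(f"{name}: {w:.3f}")
```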
Pages 21-40
In the contemporary dynamic business environment, the dependability of process mining algorithms is intricately tied to the quality of event logs, often marred by data challenges stemming from human involvement in business processes. This study introduces a novel approach that amalgamates insights from prior works with unsupervised techniques, specifically Principal Component Analysis (PCA), to elevate the precision and reliability of event log representations. Implemented in Python with the pm4py library, the methodology is applied to real event logs. The adoption of Petri nets for process representation aligns with systematic approaches advocated by earlier studies, enhancing transparency and interpretability. Results demonstrate the method's efficacy through improved metrics such as fitness, precision, and F-measure, accompanied by visualizations elucidating the optimal number of principal components. This study offers a comprehensive and practical solution, bridging gaps in existing methodologies, and its integration of multiple strategies, particularly PCA, showcases versatility in optimizing process mining analyses. The consistent improvements observed underscore the method's potential across diverse business contexts, making it accessible and pertinent for practitioners engaged in real-world business processes. Overall, this research contributes an innovative approach to improving event log quality, thereby advancing the field of process mining with practical implications for organizational decision-making and process optimization.
Keywords: Process Mining, Quality Metrics, Business Process Model, Event Log
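As a rough illustration of the pipeline this abstract describes, the sketch below encodes traces as activity-count vectors, reduces them with PCA, drops the traces worst reconstructed in the reduced space, and re-evaluates a discovered Petri net. The file name log.xes, the 95% variance threshold, and the 10% removal fraction are assumptions; the paper's actual selection criterion may differ.

```python
import numpy as np
import pandas as pd
import pm4py
from sklearn.decomposition import PCA

log = pm4py.read_xes("log.xes")  # file path is an assumption

# Encode each trace (case) as a vector of activity counts.
counts = pd.crosstab(log["case:concept:name"], log["concept:name"])

# Keep enough principal components to explain 95% of the variance
# (threshold is illustrative).
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(counts)

# Score traces by reconstruction error; drop the worst 10% as noisy
# (the paper's actual selection rule may differ).
recon = pca.inverse_transform(reduced)
errors = np.linalg.norm(counts.values - recon, axis=1)
keep = counts.index[errors <= np.quantile(errors, 0.90)]
filtered = log[log["case:concept:name"].isin(keep)]

# Rediscover a Petri net and report the quality metrics.
net, im, fm = pm4py.discover_petri_net_inductive(filtered)
fitness = pm4py.fitness_token_based_replay(filtered, net, im, fm)["log_fitness"]
precision = pm4py.precision_token_based_replay(filtered, net, im, fm)
f_measure = 2 * fitness * precision / (fitness + precision)
print(f"fitness={fitness:.3f} precision={precision:.3f} F={f_measure:.3f}")
```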
Pages 41-79
Decision-making processes are significantly influenced by a myriad of factors, with data quality emerging as a crucial determinant. Despite widespread awareness of the detrimental effects of poor-quality data on decisions, organizations grapple with persistent challenges due to the sheer volume of data within their systems. Existing literature advocates providing Data Quality Metadata (DQM) to communicate data quality levels to decision-makers. However, concerns that DQM may induce cognitive overload suggest it could hinder decision-makers and impact outcomes negatively. To address this concern, we conducted an experimental exploration of the impact of DQM on decision outcomes. Our study aimed to identify the specific groups of decision-makers who benefit from DQM and to uncover the factors influencing its usage. Statistical analyses revealed that decision-makers with an elevated awareness of data quality exhibited greater DQM utilization, resulting in higher decision accuracy. Nevertheless, a trade-off was observed: the efficiency of decision-makers suffered when employing DQM. We propose that the positive impact of incorporating DQM on decision outcomes is contingent on characteristics such as a high level of data quality knowledge, although we acknowledge that the inference of this positive impact could be made more transparent and explained more thoroughly. Our findings caution against a blanket inclusion of DQM in data warehouses, emphasizing the need for tailored investigations into its utility and impact within specific organizational settings.
Keywords: Data Quality Metadata (DQM), Decision Support Systems, Data Quality, Decision Strategy
Pages 81-105
Record linkage is a vital process for consolidating data from different sources, particularly for Persian records, where diverse data structures and formats present challenges. To tackle these complexities, an expert system with decision tree algorithms is crucial for ensuring precise record linkage and data aggregation. By incorporating decision trees into an expert system framework, adaptation operations are created from predefined rules, simplifying the aggregation of disparate data sources. This method not only surpasses traditional approaches such as IF-THEN rules in effectiveness and ease of use but, owing to its intuitive nature, is also more accessible to non-technical users. Integrating probabilistic record linkage results into the decision tree model within the expert system automates the linkage process, allowing users to customize string metrics and thresholds for optimal outcomes. The model's accuracy of over 95% on test data highlights its effectiveness in predicting and adjusting to data variations, confirming its reliability across record linkage scenarios. Combining machine learning decision trees with probabilistic record linkage in an expert system thus represents a significant advancement, providing a robust solution for data aggregation in intricate environments and large-scale projects involving Persian records. This approach not only streamlines the consolidation of diverse data sources but also enhances the accuracy and efficiency of record linkage operations. By leveraging machine learning techniques and automated decision-making, organizations can achieve significant improvements in data quality and consistency, paving the way for more reliable and insightful analytical results when implementing statistical registers.
Keywords: Machine Learning, Record Linkage, Decision Tree, Performance Evaluation
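To make the combination of string metrics and a decision tree concrete, here is a minimal sketch using Python's standard-library difflib for similarity features and scikit-learn for the classifier; the toy record pairs and the field choices are invented for illustration and are not the paper's data.

```python
from difflib import SequenceMatcher
from sklearn.tree import DecisionTreeClassifier

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

# Toy candidate pairs (name, city) with match labels; real Persian
# records would add normalization steps (e.g., unifying ی/ي and ک/ك).
pairs = [
    (("Ali Rezaei", "Tehran"), ("Ali Rezaee", "Tehran"), 1),
    (("Sara Ahmadi", "Shiraz"), ("Sara Ahmady", "Shiraz"), 1),
    (("Ali Rezaei", "Tehran"), ("Reza Karimi", "Mashhad"), 0),
    (("Sara Ahmadi", "Shiraz"), ("Maryam Noori", "Tabriz"), 0),
]

# One similarity feature per field.
X = [[similarity(a[0], b[0]), similarity(a[1], b[1])] for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

query = [[similarity("Ali Rezaie", "Ali Rezaei"), similarity("Tehran", "Tehran")]]
print("match" if clf.predict(query)[0] else "non-match")
```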
Pages 107-142
In the current business environment and the post-COVID-19 era, quality information builds the foundation of knowledge sharing and facilitates business process management, both of which play a fundamental role in the innovative performance of small and medium enterprises (SMEs). The purpose of this research is therefore to investigate the effect of information quality on the innovative performance of SMEs, with regard to the mediating role of knowledge sharing and business process management. The research is applied in terms of its goal and descriptive-survey in terms of its method. The statistical population comprises 460 SMEs in Kerman, of which 210 companies were selected as the statistical sample based on Morgan's table, using random sampling. The research instrument is a questionnaire whose validity and reliability were confirmed; in total, 420 questionnaires were distributed among managers and vice presidents of the companies. The data were analyzed with SPSS 26 and SmartPLS 3. The results indicate that knowledge sharing and business process management mediate the effect of information quality on the innovative performance of SMEs. Information quality affects knowledge sharing, business process management, and the innovative performance of SMEs, and knowledge sharing and business process management in turn affect innovative performance. SMEs should continuously improve the quality of company information and, through knowledge sharing and business process management, leverage this information to improve innovative performance.
Keywords: Information Quality, Innovative Performance, Knowledge Sharing, Business Process Management, Small and Medium Enterprises (SMEs)
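The mediation structure described above (information quality → knowledge sharing/BPM → innovative performance) is tested in the paper with PLS-SEM in SmartPLS. Purely as an illustration of what a mediated (indirect) effect is, the sketch below bootstraps one indirect path on synthetic data; it is not the paper's analysis, and all coefficients and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 210  # mirrors the sample size reported in the abstract

# Synthetic standardized scores: information quality (IQ) drives
# knowledge sharing (KS), which drives innovative performance (IP).
iq = rng.normal(size=n)
ks = 0.5 * iq + rng.normal(scale=0.8, size=n)
ip = 0.4 * ks + 0.2 * iq + rng.normal(scale=0.8, size=n)

def indirect(idx):
    """Indirect effect a*b of IQ -> KS -> IP on a (re)sample."""
    x, m, y = iq[idx], ks[idx], ip[idx]
    a = np.polyfit(x, m, 1)[0]  # a path: slope of KS on IQ
    # b path: coefficient of KS in a regression of IP on KS and IQ.
    X = np.column_stack([np.ones(len(idx)), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap 95% CI for the indirect effect.
boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {indirect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```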
Pages 143-166
In today's data-centric business landscape, characterized by the omnipresence of advanced business intelligence and data science technologies, process mining takes center stage in Business Process Management. This study addresses the critical challenge of ensuring the quality of event logs, which serve as the foundational data source for process mining. Event logs, derived from interactions among process participants and information systems, offer profound insights into the authentic behavior of business processes, reflecting organizational rules, procedures, norms, and culture. However, the quality of these event logs is often degraded by interactions between different actors and systems. In response, our research introduces a systematic approach, leveraging Python and the pm4py library for data analysis, employing trace filtering techniques, and utilizing Petri nets for process model representation. This paper proposes a methodology that demonstrates a notable enhancement in the quality metrics of extracted subprocesses through trace filtering. Comparative analyses between original and filtered logs reveal improvements in fitness, precision, generalization, and simplicity, underscoring the practical significance of trace filtering in refining complex process models. These findings provide actionable insights for practitioners and researchers engaged in process mining and modeling, emphasizing the importance of data quality in deriving accurate and reliable business process insights.
Keywords: Process Quality, Quality Metrics, Business Process Model, Event Log
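As a hedged sketch of the trace-filtering step this abstract describes, the snippet below keeps only the most frequent variants of a log, rediscovers a Petri net, and compares fitness and precision before and after filtering; the file name log.xes and k = 10 are assumptions, and the paper's filtering criteria may be more involved.

```python
import pm4py

log = pm4py.read_xes("log.xes")  # file path is an assumption

def evaluate(event_log):
    """Discover a Petri net and return (fitness, precision)."""
    net, im, fm = pm4py.discover_petri_net_inductive(event_log)
    fit = pm4py.fitness_token_based_replay(event_log, net, im, fm)["log_fitness"]
    prec = pm4py.precision_token_based_replay(event_log, net, im, fm)
    return fit, prec

# Trace filtering: keep only the 10 most frequent variants
# (k is illustrative; the paper may use richer criteria).
filtered = pm4py.filter_variants_top_k(log, 10)

for name, lg in [("original", log), ("filtered", filtered)]:
    fit, prec = evaluate(lg)
    print(f"{name}: fitness={fit:.3f} precision={prec:.3f}")
```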
Pages 167-189
Business Process Management (BPM) drives corporate success by ensuring processes are effective and efficient. In recent decades, digital transformation has become increasingly important for companies. To improve a business process, it is essential to capture and characterize it clearly: a detailed description of a business process can serve as a foundation for designing IT systems, ensuring data quality, setting up performance measurement, and enacting processes with a BPMS, among other applications. Today, many Iranian companies are also interested in measuring their BPM. The current research therefore ranks BPM measurement methods with a digital transformation approach, using the CoCoSo (combined compromise solution) multi-criteria technique (Yazdani et al., 2018), in the Semnan industrial park. To this end, BPM measurement methods and measurement criteria with a digital transformation and data quality approach were extracted from the research literature, and a sensitivity analysis was performed to validate the results. The ranking shows that BPM measurement models based on total quality management are the most suitable.
Keywords: BPM, Multi-Criteria Decision Models, CoCoSo Technique, Digital Transformation
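The abstract names CoCoSo but not its steps. Below is a minimal NumPy sketch of the standard CoCoSo aggregation (min-max normalization, weighted-sum and weighted-power comparability scores, three appraisal scores fused into a final index); the decision matrix, criteria weights, and λ = 0.5 are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative decision matrix: 3 BPM-measurement models (rows)
# scored on 4 criteria (columns); all criteria treated as benefits.
X = np.array([
    [7.0, 6.0, 8.0, 5.0],
    [6.0, 8.0, 7.0, 6.0],
    [8.0, 5.0, 6.0, 7.0],
])
w = np.array([0.3, 0.3, 0.2, 0.2])  # criteria weights (assumed)
lam = 0.5

# Min-max normalization for benefit criteria.
r = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Weighted comparability sequences.
S = (w * r).sum(axis=1)   # weighted sum
P = (r ** w).sum(axis=1)  # weighted power sum

# Three appraisal scores and the final CoCoSo index.
ka = (S + P) / (S + P).sum()
kb = S / S.min() + P / P.min()
kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
k = (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3

print("ranking (best first):", list(np.argsort(-k)))
```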
Pages 191-218
Molten gold ownership transfer holds both historical and economic significance and is an essential aspect of financial and wealth management practices. Traditional systems for transferring molten gold ownership are inefficient, vulnerable to fraud, and plagued by a lack of transparency. Blockchain technology, with its decentralized, immutable, and transparent nature, offers a promising solution to these challenges. This paper explores the transformative potential of blockchain technology in revolutionizing molten gold ownership transfer. Transferring molten gold ownership through blockchain provides a safe and transparent way to track and verify the ownership of gold assets. The proposed model allows the creation of digital tokens that represent physical gold and can then be exchanged on a blockchain platform. By demonstrating the transformative potential of blockchain in molten gold ownership transfer, this paper contributes to the evolving discourse on the intersection of blockchain technology and asset management, setting the stage for a more efficient, secure, transparent, and distributed gold market.
Keywords: Molten Gold, Ownership Transfer, Blockchain Technology, Token, Smart Contracts
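To make the tokenization idea concrete, here is a purely illustrative sketch of a hash-chained ledger of gold-token transfers; a real deployment would use an actual blockchain platform and smart contracts, neither of which this toy models, and all names and quantities are invented.

```python
import hashlib
import json
import time

class GoldTokenLedger:
    """Toy append-only ledger: each entry records a token transfer
    and is chained to the previous entry via its hash."""

    def __init__(self):
        self.entries = []

    def _hash(self, entry: dict) -> str:
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def transfer(self, token_id: str, sender: str, receiver: str, grams: float):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"token": token_id, "from": sender, "to": receiver,
                 "grams": grams, "ts": time.time(), "prev": prev}
        entry["hash"] = self._hash(entry)
        self.entries.append(entry)

    def verify(self) -> bool:
        """Check that the hash chain is intact and untampered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or self._hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = GoldTokenLedger()
ledger.transfer("GOLD-001", "mint", "alice", 10.0)
ledger.transfer("GOLD-001", "alice", "bob", 10.0)
print("chain valid:", ledger.verify())
```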
Pages 219-253
The term "cloud computing" has been widely studied and used by major corporations since it was first coined. A variety of research topics have been addressed from the cloud computing perspective, including resource management, cloud security, and energy efficiency, to name just a few. Yet cloud computing still faces the significant obstacle of determining how best to schedule tasks and manage available resources: the size and dynamic resource provisioning of current data centres demand effective scheduling strategies. The purpose of this work is to provide an overview of the task scheduling methods used in cloud computing environments to date. An attempt has been made to categorize current methods, investigate problems, and identify the important open problems in this area. Our data reveal that 34% of researchers concentrate on makespan among QoS (Quality of Service) metrics, 17% on cost, 15% on load balancing, 10% on deadline, and 9% on energy usage; other QoS criteria contribute far less. According to this study, the scheduling algorithms used by researchers 80% of the time include the genetic algorithm among bio-inspired methods and particle swarm optimization in swarm intelligence. The available literature also shows that 70% of the studies used CloudSim as their simulation tool of choice. This paper further highlights a variety of ongoing problems and potential future directions in QoS-driven task scheduling algorithms for cloud computing environments.
Keywords: Resource Allocation, Meta-Heuristic, Cloud Computing, Resource Scheduling, Optimization Techniques, Task Scheduling
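Since makespan dominates the QoS metrics this survey tallies, the following minimal sketch shows how makespan arises from a simple greedy minimum-completion-time assignment of independent tasks to heterogeneous VMs; the task lengths and VM speeds are illustrative assumptions.

```python
# Illustrative workload: task lengths (MI) and VM speeds (MIPS).
tasks = [400, 250, 900, 300, 650, 150]
vm_speeds = [100, 150, 200]
ready = [0.0] * len(vm_speeds)  # next free time per VM

# Greedy heuristic: assign each task (longest first) to the VM
# that would finish it earliest (minimum completion time).
for length in sorted(tasks, reverse=True):
    vm = min(range(len(vm_speeds)), key=lambda v: ready[v] + length / vm_speeds[v])
    ready[vm] += length / vm_speeds[vm]

# Makespan = completion time of the last VM to finish.
print(f"makespan: {max(ready):.2f} s")
```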
Pages 255-285
School principals sometimes make decisions that cannot solve the problem at hand because they do not adequately study the factors that influence and hinder the implementation of decisions before making them. Low decision quality results from the unavailability of data or of current information related to planning parameters. The absence of a decision support system, as part of an education management information system that turns such data into information for school principals, is a further obstacle to decision-making quality. The main objective of this research is to propose a decision-making model for school principals. This is a quantitative study with a questionnaire as the research instrument; the sample comprised heads of vocational schools in the Special Region of Yogyakarta and Central Java. Data were analyzed with Structural Equation Modeling (SEM) using SmartPLS software. The results showed that both the decision information system and the decision support system must underpin practical decision-making quality: the two systems increase the ability to select among alternatives and to analyze problem-solving. Future research may develop an Android-based principal's decision support system.
Keywords: Decision Quality, Decision Making, School Principal, Information System, Decision Support System
Pages 287-308
Organizing a large volume of scientific multimedia data requires appropriate indexing methods as one of the processes of information organization. Appropriate methods and algorithms are those that improve various aspects of quality in organizing and retrieving information. For this reason, the purpose of this research is to identify the most important dimensions of data quality in scientific multimedia indexing. To this end, different dimensions of data quality were compared on different criteria, and the most important dimensions were identified using a Shannon entropy weighting approach and the TOPSIS group ranking method. In addition, the strength and direction of the correlations between the dimensions of data quality were evaluated using a correlation matrix. In the first part of the research, the best ranks (priorities) belonged to the data quality dimensions of recall, precision, completeness, appropriate amount of data, accuracy, relevancy, concise 1, consistency, concise 2, interpretability, value-added, and accessibility, respectively. The second part showed that the data quality dimensions of interpretability and relevancy had the highest correlation with the most important dimensions, i.e. recall and precision. One implication of this research is that scientific multimedia indexing methods can be measured and evaluated against different dimensions of data quality, weighted by their importance.
Keywords: Data Quality, Scientific Multimedia Indexing, Prioritization, Ranking, Correlation, Relationship, Keyword Extraction
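The abstract describes Shannon entropy weighting followed by TOPSIS ranking. The sketch below implements the textbook versions of both on an illustrative matrix (rows could be indexing methods, columns data quality criteria); the numbers are invented, not the paper's data.

```python
import numpy as np

# Illustrative scores: 4 indexing methods (rows) on 5 data quality
# criteria (columns), all treated as benefit criteria.
X = np.array([
    [0.80, 0.70, 0.90, 0.60, 0.75],
    [0.60, 0.85, 0.70, 0.80, 0.65],
    [0.90, 0.60, 0.65, 0.70, 0.85],
    [0.70, 0.75, 0.80, 0.90, 0.60],
])

# --- Shannon entropy weighting ---
p = X / X.sum(axis=0)
e = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])
w = (1 - e) / (1 - e).sum()  # higher weight for more discriminating criteria

# --- TOPSIS ---
r = X / np.sqrt((X ** 2).sum(axis=0))  # vector normalization
v = w * r
ideal, anti = v.max(axis=0), v.min(axis=0)
d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)  # closer to 1 is better

print("ranking (best first):", list(np.argsort(-closeness)))
```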