Table of Contents

Big Data Analysis and Computing Visions
Volume: 5, Issue: 1, Mar 2025
- Publication date: 1403/12/11
- Number of articles: 6
- Pages 1-10
Image and video completion are essential tasks in the field of image and video processing, often used for restoring damaged regions in images and video frames. The primary challenge in these tasks is to complete them without introducing artifacts or inconsistencies noticeable to the viewer. While image completion focuses on filling in missing parts in a static context, video completion requires additional considerations due to the temporal dimension. The motion of objects and the preservation of temporal consistency are critical factors in video completion. This research proposes a novel method for image and video completion based on Neutrosophic theory, which handles uncertainty in both the spatial and intensity domains. Neutrosophy is utilized to interpret the indeterminacy present in images, allowing for more accurate segmentation and better handling of incomplete data. The proposed method first segments the image using Neutrosophic-based segmentation and then uses the segmented information to guide the completion of missing regions. For video completion, a two-step approach is introduced that separates static backgrounds from moving objects. The background is reconstructed using image completion based on Neutrosophic-based segmentation, and the foreground is completed by identifying the data that best match the missing parts; these data are selected using a contour-based method that applies neutrosophic sets to identify the most suitable candidates. The novelty of the approach lies in several key contributions: 1) the use of Neutrosophic theory to handle spatial and intensity uncertainties, 2) a Neutrosophic-based similarity measure for image segmentation, 3) a new metric for finding the most suitable patch for hole filling, and 4) a novel method for preserving boundaries and uniformity in video completion, particularly in the presence of moving objects. Experimental results demonstrate the effectiveness of the proposed methods, with improved visual quality and reduced inconsistencies compared to previous state-of-the-art methods. However, challenges remain in applying the method to highly detailed images with many classes and to dynamic backgrounds.
Keywords: Video Completion, Video Inpainting, Segmentation, Hole Filling, Neutrosophic Sets
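As background to the neutrosophic image domain this abstract builds on, the sketch below maps a grayscale image to truth/indeterminacy/falsity components, following the formulation commonly used in the neutrosophic image-segmentation literature. It is not the paper's own similarity measure or completion pipeline, and the window size is an arbitrary choice.

```python
# Minimal sketch: grayscale image -> neutrosophic (T, I, F) maps, per the
# formulation common in neutrosophic image segmentation (local mean for
# truth, local deviation for indeterminacy). Not the paper's exact method.
import numpy as np
from scipy.ndimage import uniform_filter

def to_neutrosophic(gray: np.ndarray, win: int = 5):
    """Return (T, I, F) maps for a grayscale image."""
    g = gray.astype(np.float64)
    g_bar = uniform_filter(g, size=win)              # local mean intensity
    T = (g_bar - g_bar.min()) / (g_bar.max() - g_bar.min() + 1e-12)
    delta = np.abs(g - g_bar)                        # local inhomogeneity
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)  # indeterminacy
    F = 1.0 - T                                      # falsity as complement of truth
    return T, I, F
```

Pixels with high indeterminacy I (edges, damaged regions) are exactly the ones a neutrosophic segmentation treats with caution before guiding completion.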
- Pages 11-23
In light of the growing frequency of natural disasters, social networking sites are now used to poll public perception and evaluate governmental performance. The aim of this study is to examine the effects of negative and positive social media discussions on government responses to the 2010 flood disaster in Pakistan. Restricted to tweets carrying the Pakistan Flood 2010 and Disaster Relief hashtags, this sentiment analysis classified public responses as positive, negative, or neutral. The sentiments are transformed into actionable insights using the Enhanced Hybrid Dark Social Analytical Framework (EHDSAF) across different areas. The study advances knowledge of how public sentiment shapes government responses by showing that negativity correlates with slower response and revised policies. The majority of the tweets analyzed were neutral (45%), followed by positive (35%) and negative (20%). Negative sentiment tends to be concentrated during the peak crisis period. Higher negative sentiment, particularly in big cities, correlates with more immediate and substantial government interventions, indicated by a strong correlation of 0.65. The Pearson correlation coefficient, calculated as 0.68, suggests a strong relationship between public sentiment and response. The study therefore establishes social media as an accountability forum that provides real-time feedback to government agencies in the course of calamity management. This paper highlights the effectiveness of sentiment analysis for updating the way disasters are responded to, as well as for improving public perception of government efforts.
Keywords: EHDSAF, Twitter, Dark Social Data, API, Sentiment Analysis, GOP, Machine Learning, VADER
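The two measurable steps the abstract describes can be sketched directly: VADER polarity labeling of tweets (using the standard ±0.05 compound-score thresholds) and a Pearson correlation between sentiment and response. The EHDSAF framework itself is the paper's contribution and is not reproduced; all tweets and numbers below are placeholders.

```python
# Minimal sketch of VADER sentiment labeling plus a Pearson correlation
# between regional negative-sentiment share and a response metric.
# All data are hypothetical illustrations, not the study's dataset.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import pearsonr

analyzer = SentimentIntensityAnalyzer()

def classify(tweet: str) -> str:
    """Label a tweet via VADER's compound score (standard thresholds)."""
    c = analyzer.polarity_scores(tweet)["compound"]
    return "positive" if c >= 0.05 else "negative" if c <= -0.05 else "neutral"

tweets = ["Relief camps finally reached our village",
          "No help from the government after three days of flooding"]
print([classify(t) for t in tweets])

neg_share = [0.42, 0.18, 0.31, 0.25]   # hypothetical per-region negative share
response = [0.80, 0.35, 0.62, 0.50]    # hypothetical response-speed metric
r, p = pearsonr(neg_share, response)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```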
- Pages 24-36
Pixel transitions are critical in image processing, largely depending on interpolation methods to ensure smoothness and clarity. This work focuses on two widely used image interpolation techniques: nearest neighbor interpolation and bilinear interpolation, both implemented using integrated software code. Our methodology enables each interpolation technique to be applied independently, allowing for a direct comparison of their performance. To achieve a thorough evaluation of each interpolation method, we utilize a set of essential quality assessment metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Grayscale Analysis, and Mean Squared Error (MSE). These metrics were selected to provide a balanced assessment of image sharpness, structural accuracy, and overall visual quality. The results of this study offer a detailed analysis of the strengths and limitations of each interpolation technique. These findings are intended to assist researchers and practitioners in selecting the most suitable interpolation method for their specific requirements in the image processing domain. By providing a comparative framework, this work contributes to the field by enhancing methods for assessing and optimizing image quality in digital imaging applications.
Keywords: Image Processing, Bilinear Interpolation, Nearest Neighbor Interpolation, Image Optimization
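For readers unfamiliar with the two schemes, here is a compact numpy sketch of nearest neighbor and bilinear upscaling for a grayscale image, plus two of the four metrics (MSE, PSNR); SSIM is available via skimage.metrics.structural_similarity. This is an illustrative reimplementation, not the paper's integrated software code.

```python
# Minimal sketch: nearest-neighbor and bilinear upscaling of a 2D image,
# with MSE and PSNR for quality assessment. Illustrative only.
import numpy as np

def nearest_neighbor(img: np.ndarray, sy: float, sx: float) -> np.ndarray:
    h, w = img.shape
    ys = np.minimum((np.arange(int(h * sy)) / sy).round().astype(int), h - 1)
    xs = np.minimum((np.arange(int(w * sx)) / sx).round().astype(int), w - 1)
    return img[ys][:, xs]                      # pick the closest source pixel

def bilinear(img: np.ndarray, sy: float, sx: float) -> np.ndarray:
    h, w = img.shape
    y = np.arange(int(h * sy)) / sy            # fractional source coordinates
    x = np.arange(int(w * sx)) / sx
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (y - y0)[:, None], (x - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy           # blend the four neighbors

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    return 10 * np.log10(peak ** 2 / max(mse(a, b), 1e-12))
```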
- Pages 37-51
The main aim of this paper is to analyze the application of Lean Six Sigma (LSS) tools to enhance efficiency and product quality within the region's textile sector. Specifically, the paper's objective is three-fold: to discuss how Lean and Six Sigma may be implemented separately, to discuss how these two improvement methodologies can be integrated, and to develop an integrated LSS framework. The research draws on journal articles as well as an applied case study: it reviews previous frameworks and approaches to Lean and Six Sigma integration and establishes a new LSS framework using the DMAIC methodology. The study also comprises quantitative and qualitative data collection from a survey and process evaluation in a Ready-Made Garment (RMG) factory. The research presented here outlines a comprehensive LSS strategy that would drastically cut down on defects and increase efficiency in the textile industry. In the case study, using LSS, the Sigma level for rework improved from 2.7 to 3.0, indicating fewer defects. This paper introduces a novel LSS framework tailored for the textile industry, especially Small and Medium-sized Enterprises (SMEs). Building on previous research, it identifies specific challenges faced by the textile sector and recommends adopting digital technologies and environmental considerations, including Green LSS. The study offers valuable insights for practitioners aiming to implement continuous improvement approaches, particularly in SMEs, by integrating digital tools.
Keywords: Lean Six Sigma, DMAIC, Industry 4.0, Process Optimization, Operational Efficiency
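The reported Sigma figures rest on standard process-sigma arithmetic. The sketch below converts defects per million opportunities (DPMO) to a sigma level under the conventional 1.5-sigma shift; the defect counts are hypothetical and are not the case-study data.

```python
# Minimal sketch: sigma level from DPMO with the conventional 1.5-sigma
# shift. Defect counts below are hypothetical, chosen only to bracket the
# 2.7 -> 3.0 improvement reported in the abstract.
from scipy.stats import norm

def sigma_level(defects: int, units: int, opportunities: int = 1) -> float:
    """Process sigma = z-score of the yield plus the 1.5-sigma shift."""
    dpmo = defects / (units * opportunities) * 1_000_000
    return norm.ppf(1 - dpmo / 1_000_000) + 1.5

print(f"{sigma_level(96_800, 1_000_000):.2f}")  # ~2.80 sigma
print(f"{sigma_level(66_800, 1_000_000):.2f}")  # ~3.00 sigma
```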
- Pages 52-73
Soft sets provide a robust mathematical framework for addressing uncertainty, offering innovative solutions for problems involving parametric data. Soft set operations are core concepts within soft set theory. In this paper, we introduce a novel soft product for soft sets, termed the soft star-product, along with its complete algebraic properties concerning various types of soft equalities and subsets. By analyzing the distributions of the soft star-product over different soft set operations, we also explore the relationships between this product and other soft set operations. The paper concludes with an example that illustrates the method's effectiveness in various applications, utilizing the int-uni operator and the int-uni decision function within the soft star-product for the int-uni decision-making method, which identifies an optimal set of elements from available alternatives. This work is a valuable contribution to the soft set literature, as the theoretical foundation of soft computing methods is based on rigorous mathematical principles.
Keywords: Soft Set, Soft Star-Product, Soft Subset, Soft Equal Relations, Decision-Making
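For readers new to soft sets, the sketch below represents a soft set as a parameter-to-subset mapping over a universe, with the standard restricted intersection and union from the soft set literature. The soft star-product is the paper's novel operation, so its definition is not reproduced here.

```python
# Minimal sketch: a soft set (F, A) over universe U as a dict mapping each
# parameter in A to a subset of U, with the standard restricted operations.
# The paper's soft star-product is intentionally not implemented.
U = {"h1", "h2", "h3", "h4"}

F = {"cheap": {"h1", "h2"}, "modern": {"h2", "h3"}}   # soft set (F, A)
G = {"cheap": {"h2", "h4"}, "wooden": {"h1"}}         # soft set (G, B)

def restricted_intersection(F: dict, G: dict) -> dict:
    """Parameter-wise intersection over the common parameters A ∩ B."""
    return {e: F[e] & G[e] for e in F.keys() & G.keys()}

def restricted_union(F: dict, G: dict) -> dict:
    """Parameter-wise union over the common parameters A ∩ B."""
    return {e: F[e] | G[e] for e in F.keys() & G.keys()}

print(restricted_intersection(F, G))   # {'cheap': {'h2'}}
print(restricted_union(F, G))          # {'cheap': {'h1', 'h2', 'h4'}}
```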
- Pages 74-93
This paper addresses a practical but complicated version of the Just-in-Time (JIT) scheduling problem, in which a set of available jobs with known processing times and due dates is processed on a single machine and delivered in batches of arbitrary size. A new mathematical model is developed to minimize the non-convex sum of earliness-tardiness and delivery-cost criteria. Given the limitations imposed by the model's large-scale complexity and its nonlinear nature, we use the recently proposed League Championship Algorithm (LCA) to solve arbitrary test instances of the problem at hand. Since LCA works in continuous space, we use several representational schemes to map the solutions generated by LCA to discrete space and compare the output of the algorithm under each mapping scenario. To measure how effective LCA is in comparison with the mathematical-modeling approach and other heuristic methods, we use the LINGO system and a discrete version of the Imperialist Competitive Algorithm (ICA) as the respective comparators. Experimental results show that LCA is highly efficient and dominates the comparator algorithms. At the same time, the time saved by LCA in reporting the final output is significant, which recommends this algorithm for other practical optimization problems.
Keywords: Scheduling, Earliness-Tardiness Penalties, Batch Delivery, League Championship Algorithm, Mathematical Programming
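Two ingredients of this setup can be sketched concretely: the smallest-position-value (SPV) rule commonly used to map a continuous metaheuristic vector to a job permutation, and an earliness-tardiness-plus-delivery cost evaluation under one common batch-delivery assumption (all jobs in a batch are delivered when the batch finishes). The paper's exact model, its mapping schemes, and the LCA itself are not reproduced; all numbers below are illustrative.

```python
# Minimal sketch: (1) SPV mapping from a continuous solution vector to a
# job permutation, (2) earliness + tardiness + per-batch delivery cost
# under the assumption that a batch is delivered at its completion time.
import numpy as np

def spv(position: np.ndarray) -> list:
    """Continuous vector -> permutation: rank jobs by coordinate value."""
    return list(np.argsort(position))

def cost(seq, batch_sizes, p, d, alpha=1.0, beta=1.0, delivery=5.0):
    """Total earliness/tardiness penalties plus a fixed cost per batch."""
    t, total, i = 0.0, 0.0, 0
    for size in batch_sizes:                 # consecutive batches in seq
        batch = seq[i:i + size]; i += size
        t += sum(p[j] for j in batch)        # batch delivered at its finish
        for j in batch:
            total += alpha * max(d[j] - t, 0) + beta * max(t - d[j], 0)
        total += delivery
    return total

p = [3, 1, 4, 2]                             # processing times
d = [4, 5, 9, 10]                            # due dates
seq = spv(np.array([0.7, 0.1, 0.9, 0.4]))    # -> [1, 3, 0, 2]
print(cost(seq, [2, 2], p, d))               # cost of this batched schedule
```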