Table of Contents
Journal of Algorithms and Computation
Volume: 53, Issue: 2, Dec 2021
Publication date: 1400/10/26
Number of titles: 12


Pages 1-32
Linear and homogeneous recurrence equations having polynomial coefficients are said to be holonomic. These equations are useful for proving and discovering combinatorial and hypergeometric identities. Given a field K of characteristic zero, a sequence a_n is a hypergeometric term with respect to K if the ratio a_{n+1}/a_n is a rational function over K. Two algorithms, by Marko Petkovšek (1993) and Mark van Hoeij (1999), were proposed to compute hypergeometric term solutions of holonomic recurrence equations. The latter algorithm is more efficient and was implemented by its author in the Computer Algebra System (CAS) Maple through the command LREtools[hypergeomsols]. We describe an equally efficient algorithm that ignores some recommendations of van Hoeij's approach.
Keywords: Holonomic recurrence equations, hypergeometric terms, van Hoeij's algorithm, Petkovšek's algorithm, finite singularities, Fuchs relations, local types at infinity
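As a concrete illustration of the definition (the specific term below is our example, not taken from the paper): the sequence a_n = 2^n · n! is a hypergeometric term because the ratio a_{n+1}/a_n = 2(n+1) is a rational function of n. A minimal numeric check:

```python
from fractions import Fraction
from math import factorial

def term(n):
    # candidate hypergeometric term a_n = 2^n * n!
    return Fraction(2**n * factorial(n))

# For a hypergeometric term the ratio a_{n+1}/a_n is a rational
# function of n; here it should equal 2*(n+1) for every n.
for n in range(10):
    assert term(n + 1) / term(n) == 2 * (n + 1)
print("ratio a_{n+1}/a_n = 2(n+1) verified for n = 0..9")
```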
Pages 33-45
A popular research topic in Graph Convolutional Networks (GCNs) is speeding up the training of the network. The main bottleneck in training a GCN is the exponential growth of computations. ClusterGCN, based on the observation that a node and its neighbors are usually grouped in the same cluster, considers the clustering structure of the graph and expands each node's neighborhood within its cluster when training the GCN. The main assumption of ClusterGCN is a weak relation between clusters, which does not hold for all graphs. Here we extend their approach by using overlapped clustering instead of the crisp clustering used in ClusterGCN. This is achieved by allowing marginal nodes to contribute to training in more than one cluster. The proposed method is evaluated through experiments on several benchmark datasets. The experimental results show that the proposed method is, on average, more efficient than ClusterGCN.
Keywords: Networks, Graph Neural Networks, Clustering, Spectral Clustering 
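The overlapped-clustering idea can be sketched as follows (a minimal illustration with an assumed adjacency-list graph and a given crisp partition, not the authors' implementation): each marginal node, i.e. a node with a neighbor in another cluster, is added to that neighboring cluster as well.

```python
# Sketch: turn a crisp clustering into an overlapped one by letting each
# marginal node -- a node with a neighbor outside its own cluster -- also
# join the neighboring cluster.
def overlap_clusters(adj, clusters):
    """adj: {node: set(neighbors)}, clusters: list of sets (crisp partition)."""
    cluster_of = {v: i for i, c in enumerate(clusters) for v in c}
    overlapped = [set(c) for c in clusters]
    for v, nbrs in adj.items():
        for u in nbrs:
            if cluster_of[u] != cluster_of[v]:
                # v is marginal: add it to u's cluster as well
                overlapped[cluster_of[u]].add(v)
    return overlapped

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
crisp = [{0, 1, 2}, {3, 4, 5}]
print(overlap_clusters(adj, crisp))  # nodes 2 and 3 end up in both clusters
```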
Pages 47-56
Let $G = (V, E)$ be a $(p,q)$ graph. Define
\begin{equation*}
\rho = \begin{cases} \frac{p}{2}, & \text{if $p$ is even}\\ \frac{p-1}{2}, & \text{if $p$ is odd}\end{cases}
\end{equation*}
and let $L = \{\pm 1, \pm 2, \pm 3, \cdots, \pm\rho\}$ be the set of labels. Consider a mapping $f : V \longrightarrow L$ that assigns different labels in $L$ to the different elements of $V$ when $p$ is even, and different labels in $L$ to $p-1$ elements of $V$, repeating a label for the remaining one vertex, when $p$ is odd. Such a labeling is said to be a pair difference cordial labeling if, with each edge $uv$ of $G$ carrying the induced label $\left|f(u) - f(v)\right|$, we have $\left|\Delta_{f_1} - \Delta_{f_1^c}\right| \leq 1$, where $\Delta_{f_1}$ and $\Delta_{f_1^c}$ respectively denote the number of edges labeled with $1$ and the number of edges not labeled with $1$. A graph $G$ for which there exists a pair difference cordial labeling is called a pair difference cordial graph. In this paper we investigate the pair difference cordial labeling behavior of planar grid and Mongolian tent graphs.
Keywords: Path, Ladder, Planar grid, Mongolian tent
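The cordial condition itself is easy to check mechanically. A minimal sketch (the example graph and labels are ours, not from the paper):

```python
# Sketch: verify the pair difference cordial condition for a given labeling.
# For each edge uv count whether |f(u)-f(v)| == 1; the labeling is pair
# difference cordial when the two counts differ by at most 1.
def is_pair_difference_cordial(edges, f):
    ones = sum(1 for u, v in edges if abs(f[u] - f[v]) == 1)
    others = len(edges) - ones
    return abs(ones - others) <= 1

# Path P4 with distinct labels from L = {+-1, +-2} (p = 4, rho = 2)
edges = [(0, 1), (1, 2), (2, 3)]
f = {0: -2, 1: -1, 2: 1, 3: 2}
print(is_pair_difference_cordial(edges, f))  # True: 2 edges labeled 1, 1 not
```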
Pages 57-74
We introduce a new invariant vulnerability parameter for graphs named J-Tightness, denoted J(G). As a stability measure, its properties are presented along with comparisons to other parameters of a graph. We show how it is calculated for complete graphs and cycles. We show that J-Tightness better fits the properties of vulnerability measures and can be used with more confidence to assess the vulnerability of any class of graphs.
Keywords: Network Vulnerability, Connectivity, Binding Number, Scattering Number, Rupture Degree, Integrity, Toughness, Tenacity, J-Tightness
Pages 75-84
Statistical methods based on the cumulative distribution function are a starting point for many parametric and nonparametric statistical inferences. However, many practical problems require dealing with observations or parameters that are inherently imprecise. Applying a common notion of fuzzy random variables, Hesamian and Taheri (2013) extended the concept of the cumulative distribution function to a fuzzy cumulative distribution function. However, the main properties of the proposed method have not yet been considered in the fuzzy environment. This paper aims to extend the classical properties of the cumulative distribution function to the fuzzy environment.
Keywords: Cumulative distribution function, fuzzy random variable, fuzzy parameter, ranking method, convergence, divergence to infinity
Pages 85-90
Let S be a set of points in the plane that are in convex position, and let O be a set of simple polygonal obstacles whose vertices are in S. The visibility graph Vis(S, O) is the graph obtained from the complete graph on S by removing all edges intersecting some obstacle of O. In this paper, we show that there is a plane 5.19-spanner of the visibility graph Vis(S, O) with degree at most 6. Moreover, we show that there is a plane 1.88-spanner of the visibility graph Vis(S, O). These results improve the stretch factor and the maximum degree of the previous results by A. van Renssen and G. Wong (Theoretical Computer Science, 2021) in the context of points in convex position.
Keywords: Plane spanner, Stretch factor, Shortest path, Computational Geometry 
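The stretch factor of a spanner can be computed directly for small instances (a toy example, not the authors' construction): with four points in convex position and no obstacles, the visibility graph is complete, and the convex-hull cycle is a plane spanner whose stretch factor is sqrt(2).

```python
from itertools import combinations
from math import dist, inf

# Sketch: stretch factor of a spanner = max over point pairs of
# (shortest-path distance in the spanner) / (distance in the visibility graph).
def shortest_paths(points, edges):
    n = len(points)
    d = [[inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v in edges:
        d[u][v] = d[v][u] = dist(points[u], points[v])
    for k in range(n):            # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def stretch_factor(points, vis_edges, spanner_edges):
    dv = shortest_paths(points, vis_edges)
    ds = shortest_paths(points, spanner_edges)
    n = len(points)
    return max(ds[i][j] / dv[i][j] for i, j in combinations(range(n), 2))

# Unit square, no obstacles: visibility graph is the complete graph.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
vis = [(i, j) for i, j in combinations(range(4), 2)]
hull = [(0, 1), (1, 2), (2, 3), (3, 0)]   # plane spanner: the hull cycle
print(round(stretch_factor(pts, vis, hull), 3))  # 1.414 (= sqrt(2))
```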
Pages 91-111
Confirming the integrity of transmitted sensitive digital content is a significant issue due to the evolution of communication technologies and the accessibility of image processing tools. Watermarking has recently been a successful method of authentication and integrity verification. However, several significant problems remain, such as confronting serious attacks and recovering after higher tampering rates. We propose a hybrid method that enables an image to be recovered successfully after a range of attacks. A blind watermarking approach is adopted which includes fragile authentication but robust recovery references. This is performed by embedding a verification code as part of the watermarked data, along with key features of the original image, into a location that is resistant to attack. To combat different kinds of attacks, the areas of the image have been investigated to find which area is more likely to be affected by each specific type of attack.
Keywords: Image watermarking, Tamper detection, Image Recovery, Attack, Tamper Recovery
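The general idea of fragile authentication can be sketched as follows (a plain LSB scheme for illustration only, not the paper's hybrid method): embed a hash-derived verification code into the least significant bits, then recompute it on receipt.

```python
import hashlib

# Minimal sketch (not the paper's hybrid scheme): embed a verification code
# in the least significant bits of the pixels, then detect tampering by
# recomputing the code from the LSB-cleared pixel values.
def code_bits(pixels):
    cleared = bytes(p & ~1 for p in pixels)          # ignore the LSB plane
    digest = hashlib.sha256(cleared).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(len(pixels))]

def embed(pixels):
    return [(p & ~1) | b for p, b in zip(pixels, code_bits(pixels))]

def verify(pixels):
    return [p & 1 for p in pixels] == code_bits(pixels)

pixels = list(range(50, 82))          # toy 32-"pixel" image
marked = embed(pixels)
tampered = marked.copy()
tampered[7] ^= 0b10                   # flip a non-LSB bit of one pixel
print(verify(marked), verify(tampered))  # untouched passes, tampered fails
```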
Pages 113-156
Cloud computing is a high-performance computing environment that can remotely provide services to customers using a pay-per-use model. The principal challenge in cloud computing is task scheduling, in which tasks must be effectively allocated to resources. The mapping of cloud resources to customer requests (tasks) is a well-known Non-deterministic Polynomial-time (NP)-Complete problem. Although the task scheduling problem is a multi-objective optimization problem, most task scheduling algorithms cannot provide an effective trade-off between makespan, resource utilization, and energy consumption. Therefore, this study introduces a Priority-based task scheduling algorithm using the Harris Hawks Optimizer (HHO), entitled PHHO. The proposed algorithm first prioritizes tasks using a hierarchical process based on length and memory. Then, the HHO algorithm is used to optimally assign tasks to resources. The PHHO algorithm aims to decrease makespan and energy consumption while increasing resource utilization and throughput. To evaluate its effectiveness, the PHHO algorithm is compared with other well-known metaheuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Whale Optimization Algorithm (WOA), the Salp Swarm Algorithm (SSA), and Moth-Flame Optimization (MFO). The experimental results show the effectiveness of the PHHO algorithm compared to the other algorithms in terms of makespan, resource utilization, throughput, and energy consumption.
Keywords: Cloud, Task scheduling, Metaheuristic, Task priority 
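The prioritize-then-assign structure can be sketched as follows (the task data, priority rule, and greedy assignment below are our simplifications; the paper uses HHO for the assignment step):

```python
# Sketch (assumed details, not the paper's exact PHHO): prioritize tasks by
# length and memory, then map them to VMs and measure the makespan.
def prioritize(tasks):
    # tasks: list of (length, memory); longer, more memory-hungry tasks first
    return sorted(tasks, key=lambda t: (t[0], t[1]), reverse=True)

def makespan(tasks, n_vms, speeds):
    finish = [0.0] * n_vms
    for length, _mem in prioritize(tasks):
        # greedy stand-in for HHO: pick the VM that finishes this task earliest
        vm = min(range(n_vms), key=lambda v: finish[v] + length / speeds[v])
        finish[vm] += length / speeds[vm]
    return max(finish)

tasks = [(400, 2), (300, 1), (200, 4), (100, 1)]
print(makespan(tasks, 2, [1.0, 2.0]))  # 350.0
```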
Pages 157-164
Selecting a suitable journal is one of the most essential steps before publishing any manuscript. Finding a relevant journal is a key factor in making one's work valuable to the community, and the final output and performance of one's research are ultimately validated only if the paper is published in the right journal. One of the greatest mistakes that authors make is submitting their manuscript to an unsuitable journal. Some authors consider only the acceptance rate and the time to first decision and publication as their main criteria; instead, an author should consider all six parameters: Scope, CiteScore, Impact Factor, Acceptance Rate, Time to First Decision, and Time to Publication. An algorithm named DEAR is used in this work, which can consider all these parameters to find the right journal among the various alternatives. The DEAR method serves as a user-friendly method for selecting the best journal.
Keywords: Manuscript submission, Journal Selection, Journal Finder, Impact Factor, Cite Score, DEAR Algorithm 
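The abstract does not spell out the steps of the DEAR method, so the following is only a generic multi-criteria ranking sketch over the six parameters (the journals, weights, and scoring rule are hypothetical, and this is not the DEAR algorithm itself):

```python
# Hypothetical illustration only: a simple weighted-sum ranking over the six
# criteria, NOT the DEAR method (whose steps are not given in the abstract).
# Benefit criteria are scored as-is; time criteria are inverted so that
# higher is always better. (Real use would also normalize the scales.)
journals = {
    #            scope  cite  impact  accept  t_first  t_pub
    "Journal A": (0.9,  4.1,  2.3,    0.35,   45,      120),
    "Journal B": (0.7,  6.0,  3.1,    0.20,   90,      200),
    "Journal C": (0.8,  2.5,  1.4,    0.50,   30,      80),
}
weights = (0.25, 0.15, 0.15, 0.15, 0.15, 0.15)
cost = (False, False, False, False, True, True)   # lower is better

def score(vals):
    total = 0.0
    for v, w, is_cost in zip(vals, weights, cost):
        total += w * (1.0 / v if is_cost else v)
    return total

ranking = sorted(journals, key=lambda j: score(journals[j]), reverse=True)
print(ranking)
```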
Pages 165-172
An edge coloring of a digraph D is called a P3-rainbow edge coloring if the edges of every directed path of D with length 2 are colored with different colors. It is proved that for a P3-rainbow edge coloring of a digraph D, at least $\lceil \log_2 \chi(D) \rceil$ colors are necessary and $2\lceil \log_2 \chi(D) \rceil$ colors are enough. One can determine in linear time whether a digraph has a P3-rainbow edge coloring with 1 or 2 colors. In this paper, it is proved that deciding whether a digraph has a P3-rainbow edge coloring with 3 colors is an NP-complete problem, even for planar digraphs. Moreover, it is shown that $\lceil \log_2 \chi(D) \rceil$ colors are necessary and sufficient for a P3-rainbow edge coloring of a transitive orientation digraph D.
Keywords: vertex equitable labeling, vertex rainbow coloring, planar digraphs, template-driven rainbow coloring, transitive digraph, dichromatic index
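The defining condition is straightforward to verify for a given coloring. A minimal sketch (the example digraph and colorings are ours):

```python
# Sketch: check whether an edge coloring of a digraph is P3-rainbow, i.e.
# the two edges of every directed path u -> v -> w get different colors.
def is_p3_rainbow(edges, color):
    out = {}
    for u, v in edges:
        out.setdefault(u, []).append(v)
    for u, v in edges:
        for w in out.get(v, []):
            if color[(u, v)] == color[(v, w)]:
                return False
    return True

# Directed 4-cycle: alternating two colors works, a single color does not.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
alt = {(0, 1): 0, (1, 2): 1, (2, 3): 0, (3, 0): 1}
mono = {e: 0 for e in edges}
print(is_p3_rainbow(edges, alt), is_p3_rainbow(edges, mono))  # True False
```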
Pages 173-196
On a graph with a negative cost cycle, the shortest path is undefined, but the number of edges of the shortest negative cost cycle can be computed; this quantity is called the Negative Cost Girth (NCG). The NCG problem arises in many optimization settings such as scheduling and model verification. The existing polynomial algorithms suffer from high computation time and memory consumption. In this paper, a powerful MapReduce framework is implemented to find the NCG of a graph. The proposed algorithm runs in O(log k) parallel time with O(n^3) work on each Hadoop node, where n and k are the size of the graph and the value of the NCG, respectively. The Hadoop implementation of the algorithm shows that the total execution time is reduced by 50% compared with the polynomial algorithms, especially in large networks, as the number of Hadoop nodes increases. The results demonstrate the efficiency of the approach for solving the NCG problem on big data in a parallel and distributed way.
Keywords: MapReduce, Parallelism, Negative Cost Girth, Sequential, Message Passing Interface 
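The connection between NCG and min-plus matrix powers, which underlies the O(log k) parallel bound via repeated squaring, can be illustrated sequentially (a toy sketch, not the paper's MapReduce implementation):

```python
from math import inf

# Sequential sketch of the idea behind NCG: the (i,i) entry of the k-th
# min-plus power of the weight matrix is the cheapest closed walk of exactly
# k edges through i, so the smallest k with a negative diagonal entry is the
# negative cost girth.
def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def negative_cost_girth(W, max_k=None):
    n = len(W)
    P = W
    for k in range(1, (max_k or n) + 1):
        if k > 1:
            P = min_plus(P, W)
        if any(P[i][i] < 0 for i in range(n)):
            return k
    return None   # no negative cycle of length <= max_k

# 4-node graph with one negative cycle 0 -> 1 -> 2 -> 0 of total cost -1
W = [[inf, 1, inf, 4],
     [inf, inf, 1, inf],
     [-3, inf, inf, 2],
     [inf, inf, inf, inf]]
print(negative_cost_girth(W))  # 3
```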
Pages 197-208
Selecting an efficient project portfolio from the available projects is a vital decision for any project manager. The main question is which projects will bring more long-term benefit to the manager or organization. Due to the complexity of this field of research, many approaches for project selection have been developed to date. Computation time and the quality of the result are the two main criteria that almost all researchers have concentrated on. In this research, a new hybrid genetic algorithm with new heuristic mutation and crossover operators is developed to choose a good portfolio from the available projects. The presented algorithm is fast and effective in reaching a good result in reasonable time. Finding a good starting point for the initial population and using good heuristic mutation and crossover operators are the main points of our algorithm. To check the quality of the results, we compare the developed algorithm with some recent ones in the literature; the comparison studies and statistical calculations demonstrate the efficiency of the new genetic algorithm in selecting a good portfolio.
Keywords: Project Portfolio, New Genetic Algorithm, Heuristic Mutation, Heuristic Crossover
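A minimal genetic-algorithm sketch for portfolio selection (the data, operators, and parameters below are hypothetical illustrations, not the paper's algorithm):

```python
import random

# Sketch: a small genetic algorithm picking a 0/1 project portfolio that
# maximizes total benefit subject to a budget constraint.
random.seed(1)
costs    = [4, 3, 5, 2, 6, 1]
benefits = [7, 4, 9, 3, 8, 2]
BUDGET = 10

def fitness(sol):
    cost = sum(c for c, s in zip(costs, sol) if s)
    if cost > BUDGET:
        return -1                      # infeasible portfolio
    return sum(b for b, s in zip(benefits, sol) if s)

def crossover(a, b):                   # one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(sol):                       # flip one project in/out
    i = random.randrange(len(sol))
    return sol[:i] + [1 - sol[i]] + sol[i + 1:]

pop = [[random.randint(0, 1) for _ in costs] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```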