Table of Contents

International Journal of Engineering
Volume: 33, Issue: 7, July 2020

  • Publication date: 1399/04/29 (19 July 2020)
  • Number of titles: 31
  • H. O. Abbas * Pages 1167-1172
    The Granular Pile Anchor (GPA) is considered one of the foundation solution techniques designed to mitigate the uplift of footings resulting from expansive soils. This study investigates the uplift movement response of GPA in expansive soil and evaluates its performance in this soil. The effects of several parameters, such as the length (L) and diameter (D) of the GPA, the thickness (H) of the expansive clay layer, and the existence of a sandy soil layer, are investigated. The results evidenced the effectiveness and ability of GPA to reduce the uplift movement of the expansive soil and showed that the uplift movement can be decreased, to some extent, by increasing the GPA length and diameter. The uplift movement of the GPA-foundation system is controlled by three separate variables: the L/H ratio, the L/D ratio, and the diameter. The uplift movement can be decreased by up to 47% if the GPA is embedded in the expansive soil layer at L/H = 1, and by 83% if the GPA passes through the expansive soil and is embedded in the underlying sandy soil at L/H = 1.4, for the same GPA diameter and foundation.
    Keywords: Sand pile, expansive soil, Swelling, Granular pile anchor
  • S. Ghasemi *, G. Ghodrati Amiri, M. Mohamadi Dehcheshmeh Pages 1173-1182
    Some civil engineering infrastructures are planned for structural health monitoring (SHM) systems based on their importance. Identifying and detecting damage automatically at the right time is one of the major objectives of such a system. One of the methods to meet this objective is model updating with the use of optimization algorithms. This paper aims to evaluate the location and severity of damage by combining two updating parameters, the flexibility matrix and the static strain energy of the structure, using augmented grey wolf optimization (AGWO) and extracting only the data of the damaged structure, with 5 percent noise applied. The average error between simulated and estimated results over ten runs for each damage scenario was less than 3 percent, which proves the proper performance of this method in detecting all damages of the 37-member three-dimensional frame and the 33-member two-dimensional truss. Moreover, the results indicate that AGWO provides a reliable tool to accurately identify damage in comparison with the particle swarm optimizer (PSO) and the grey wolf optimizer (GWO).
    Keywords: augmented grey wolf optimization algorithm, Damage detection, flexibility matrix, Modal data, static strain energy
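    As a hedged illustration of the optimization loop named in the abstract above, the sketch below shows a standard grey wolf optimizer minimizing a generic objective. The paper's augmented variant (AGWO) and its exact flexibility-matrix/strain-energy objective are not reproduced; the objective function here is a hypothetical placeholder.

```python
# Illustrative sketch only: a standard grey wolf optimizer (GWO) loop.
# The damage objective below is a placeholder for the paper's combined
# flexibility-matrix and static-strain-energy residual.
import numpy as np

def objective(damage_vector):
    # Hypothetical stand-in objective (not the paper's formulation).
    return np.sum(damage_vector ** 2)

def gwo_minimize(fun, dim, n_wolves=30, n_iter=200, lb=0.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([fun(w) for w in wolves])
        order = np.argsort(fitness)
        # Copy the three best wolves so in-place updates do not alter the leaders.
        alpha, beta, delta = (wolves[order[0]].copy(),
                              wolves[order[1]].copy(),
                              wolves[order[2]].copy())
        a = 2.0 - 2.0 * t / n_iter              # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0  # average of the three leader pulls
            wolves[i] = np.clip(new_pos, lb, ub)
    fitness = np.array([fun(w) for w in wolves])
    best = wolves[np.argmin(fitness)]
    return best, fun(best)

best_damage, best_cost = gwo_minimize(objective, dim=10)
```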
  • K. Shaiksha Vali *, B. S. Murugan, S. K. Reddy, E. Noroozinejad Farsangi Pages 1183-1191
    Hybrid concrete is focused on the development of buildings, highways, and other civil engineering structures. In the current study, various mix combinations were prepared and tested with different percentages of super-plasticizer at different levels of water reduction to obtain the optimum mix. Further, the study examined different properties of hybrid concrete and the replacement of ordinary Portland cement (OPC) with ground granulated blast furnace slag (GGBFS), silica fume (SF), and glass fibers (GF) to obtain highly cement-replaced concrete (HCRC) and glass fiber concrete (GFC). The concrete performance was evaluated based on the slump cone test, compressive strength test, split tensile strength test, flexural strength test, water absorption test, and ultrasonic pulse velocity test. It was observed from the results that the best performance of HCRC was achieved at 50% GGBFS and 3% silica fume replacement. Further, in the case of GFC, 0.2% of glass fibers showed high performance in terms of split tensile and flexural strength at all ages. The optimized concrete mixtures, HCRC and GFC, performed better than the control concrete (CC).
    Keywords: Environment-friendly, Glass Fiber Concrete, Highly Cement Replaced Concrete, Silica Fume, Super-plasticizer, Ground Granulated Blast Furnace Slag
  • S. M. Mirrezaei * Pages 1192-1200

    Trust establishment is one of the significant resources to enhance the scalability and reliability of resources in cloud environments. To establish a novel trust model for SaaS (Software as a Service) cloud resources and to optimize resource utilization across multiple user requests, an integrated software trust estimator with multi-user resource competence (IST-MRC) optimization mechanism is proposed in this paper. The IST-MRC optimization mechanism combines trustworthiness properties with optimal resource allocation for requested software apps, without causing additional traffic in the cloud environment. Initially, a behavior trust estimator is developed in the IST-MRC mechanism to measure the trust value of the software service zone. The trust value is estimated based on the Software Availability Rate, Hit Rate, and User Feedback regarding the specific software apps. Next, resources are allocated to multiple users using competence optimization. The competence optimization in the IST-MRC mechanism computes the processor speed, bandwidth, and latency to handle the varied traffic conditions of multiple user requests. Experiments are conducted to measure and evaluate factors such as Successful Request Handles, Resource Utilization Efficiency, Latency Time, and Trust Success Ratio for multiple users.

    Keywords: Bandwidth, Behavior Trust Estimator, Cloud Resources, Competence Optimization, Software-as-a-Service, User Feedback
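    The abstract above names three inputs to the behavior trust estimator (Software Availability Rate, Hit Rate, and User Feedback) without specifying how they are combined. The sketch below is a purely hypothetical weighted aggregation of those three quantities; the weights and the aggregation rule are assumptions, not the IST-MRC formulation.

```python
# Hypothetical sketch of a behavior trust score aggregated from availability rate,
# hit rate, and user feedback. Weights and aggregation rule are assumed for illustration.
from dataclasses import dataclass

@dataclass
class ServiceStats:
    successful_periods: int    # periods the software service was reachable
    total_periods: int
    hits: int                  # requests served successfully
    requests: int
    feedback_scores: list      # user ratings normalized to [0, 1]

def trust_value(s: ServiceStats, w_avail=0.4, w_hit=0.4, w_fb=0.2) -> float:
    availability_rate = s.successful_periods / s.total_periods
    hit_rate = s.hits / s.requests
    feedback = sum(s.feedback_scores) / len(s.feedback_scores)
    return w_avail * availability_rate + w_hit * hit_rate + w_fb * feedback

stats = ServiceStats(successful_periods=98, total_periods=100,
                     hits=940, requests=1000,
                     feedback_scores=[0.9, 0.8, 1.0, 0.7])
print(round(trust_value(stats), 3))
```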
  • M. Hassanpour *, H. Malek Pages 1201-1207

    The classification of various document image classes is considered an important step towards building a modern digital library or office automation system. Convolutional Neural Network (CNN) classifiers trained with backpropagation are considered the current state-of-the-art models for this task. However, there are two major drawbacks to these classifiers: the huge computational power demanded for training, and their very large number of weights. Previous successful attempts at learning document image features have been based on training very large CNNs. SqueezeNet is a CNN architecture that achieves accuracies comparable to other state-of-the-art CNNs while containing up to 50 times fewer weights, but it had not previously been applied to document image classification tasks. In this research, we take a novel approach towards learning these document image features by training a very small CNN such as SqueezeNet. We show that an ImageNet-pretrained SqueezeNet achieves an accuracy of approximately 75 percent over 10 classes on the Tobacco-3482 dataset, which is comparable to other state-of-the-art CNNs. We then visualize saliency maps of the gradient of our trained SqueezeNet's output with respect to its input, which show that the network is able to learn meaningful features that are useful for document classification. Previous works in this field have placed no emphasis on visualizing the learned document features. The importance of features such as the existence of handwritten text, document titles, text alignment, and tabular structures in the extracted saliency maps proves that the network does not overfit to redundant representations of the rather small Tobacco-3482 dataset, which contains only 3482 document images over 10 classes.

    Keywords: Squeezenet, convolutional neural network, Document image classification
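    For the approach described in the abstract above, a minimal transfer-learning sketch is given below, assuming PyTorch/torchvision and an ImageFolder layout for the 10 Tobacco-3482 classes. The data path and hyperparameters are illustrative, not the paper's exact training setup.

```python
# Minimal sketch: fine-tune an ImageNet-pretrained SqueezeNet on 10 document classes.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_classes = 10
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # document scans -> 3-channel input
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# "tobacco3482/train" is a hypothetical path, not the authors' directory layout.
train_set = datasets.ImageFolder("tobacco3482/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.squeezenet1_1(weights="IMAGENET1K_V1")              # pretrained backbone
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)   # new 10-class head
model.num_classes = num_classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```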
  • D. Bidekani Bakhtiarvand *, S. Farzi Pages 1208-1213
    Annually, web search engine providers spend a lot of money on re-ranking documents in search engine result pages (SERPs). Click models provide advantageous information for re-ranking documents in SERPs by modeling the interactions between users and search engines. Here, three modules are employed simultaneously to predict users' clicks on SERPs: the first module predicts users' click behavior using probabilistic graphical models; the second module is a time-series deep neural click model which predicts users' clicks on documents; and the third module is a similarity-based measure which builds a graph of document-query relations and uses the SimRank algorithm to predict the similarity. After running these three simultaneous processes, the three click probability values are fed to an MLP classifier as inputs. The MLP classifier learns to decide on top of the three preceding modules and then predicts a probability value which shows how likely a document is to be clicked by a user. The proposed system is evaluated on the Yandex dataset as a standard click log dataset. The results demonstrate the superiority of our model over well-known click models in terms of perplexity.
    Keywords: Click Modeling, Document Re-ranking, Modeling Users' Behavior, Search Engine Result Page Enhancement
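    A simplified stacking sketch of the final step described in the abstract above is given below, assuming the three module outputs (graphical click model, time-series neural click model, SimRank similarity) are already available as per-document probability scores. The module internals and the data values are placeholders.

```python
# Sketch: an MLP meta-classifier stacked on three click-probability scores.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training rows: one per (query, document) pair,
# columns = [p_pgm, p_neural, p_simrank]; y = 1 if the document was clicked.
X_meta = np.array([
    [0.72, 0.65, 0.80],
    [0.10, 0.22, 0.15],
    [0.55, 0.48, 0.60],
    [0.05, 0.12, 0.09],
])
y = np.array([1, 0, 1, 0])

meta_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
meta_clf.fit(X_meta, y)

# Final click probability for a new document's three module scores.
p_click = meta_clf.predict_proba([[0.60, 0.50, 0.70]])[0, 1]
print(round(p_click, 3))
```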
  • A. Aliabadian, M. R. Zahabi *, M. Mobini Pages 1214-1222
    Secret key extraction is a crucial issue in physical layer security, offering a less complex and, at the same time, more robust scheme for the next generation of 5G networks and beyond. Unlike previous works on this topic, in which orthogonal frequency division multiplexing (OFDM) sub-channels were considered independent, the effect of correlation between sub-channels on the secret key rate is addressed in this paper. A realistic model for the dependency among sub-channels is assumed. Benchmarked by simulation, the results show that the key exchange rate may decline by up to 72% due to the correlation of sub-channels. A new approach for efficient key extraction is used in this study: a singular value decomposition based (SVD-based) pre-coding is utilized to alleviate the sub-channel correlation and the channel noise. The low computational complexity of our proposed approach makes it a promising candidate for developing secure and high-speed networks. Results obtained through simulation indicate that applying pre-coding to the measured correlated data yields a minimum gain of 9 dB. In addition, the results also show the advantage of SVD over other pre-coding techniques, namely PCA, DCT, and WT.
    Keywords: Orthogonal Frequency Division Multiplexing, Physical Layer Security, Secret Key, Singular Value Decomposition-based Channel Decorrelation
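    The sketch below illustrates the general idea of SVD-based decorrelation of channel measurements before key quantization, under the assumption that each row of H holds one channel-gain measurement across correlated OFDM sub-channels. The paper's exact pre-coding, quantization, and reconciliation steps are not reproduced.

```python
# Illustrative sketch: SVD-based decorrelation of correlated channel measurements,
# followed by a simple median-threshold quantization into candidate key bits.
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_subch = 200, 64
base = rng.standard_normal((n_probes, 8))                 # low-rank "channel" structure
H = base @ rng.standard_normal((8, n_subch)) \
    + 0.1 * rng.standard_normal((n_probes, n_subch))      # correlated sub-channel gains

# Decorrelate: project the centered measurements onto the right singular vectors of H.
H_centered = H - H.mean(axis=0)
U, s, Vt = np.linalg.svd(H_centered, full_matrices=False)
H_decorrelated = H_centered @ Vt.T

# Quantize each decorrelated coefficient against its per-column median.
key_bits = (H_decorrelated > np.median(H_decorrelated, axis=0)).astype(int)
print(key_bits.shape)   # (probes, sub-channels) worth of candidate bits
```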
  • M. Grailoo, B. Alizadeh * Pages 1223-1230
    The integer word-length optimization, known as range analysis (RA), of fixed-point designs is a challenging problem in the high-level synthesis and optimization of linear-time-invariant (LTI) systems. The analysis has significant effects on the resource usage, accuracy, and efficiency of the final implementation, as well as on the optimization time. Conventional methods for recursive LTI systems suffer from inaccurate range estimations due to their dependency on the symmetry or asymmetry of the input range around zero, and their need for parameter adjustments. Underestimation endangers range safety and generates large errors due to overflows. On the other hand, overestimation increases the hardware cost and weakens the signal if the overestimated ranges are utilized in down-scaling. Therefore, in this paper, we propose an efficient, safe, and more precise RA method to measure the range of both recursive and non-recursive fixed-point LTI systems through an analytical formulation. Our main idea is to obtain the input sequences for which the variables in the LTI system reach their maximum and minimum. By applying these sequences to the system, the upper and lower bounds of the intended variables are obtained as the range. The proposed method enhances bit-width accuracy by more than 34% on average in comparison with the state of the art. The results also show about 37% and 6% savings in area and delay, respectively.
    Keywords: Analytical Formulation, Fixed-Point Arithmetic, High-Level Synthesis and Optimization, Linear-Time-Invariant System, Range Analysis, Resource Usage
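    The abstract's main idea, finding the input sequence that drives an LTI variable to its extreme, has a classical illustration: for a bounded input |x[n]| <= X, the extreme output of a filter with impulse response h is reached by the sign-matched input, giving the bound X * sum(|h|). The sketch below demonstrates this on a toy recursive filter; it is a hedged illustration of the worst-case-input idea, not the paper's full analytical formulation for internal variables.

```python
# Worst-case-input sketch for a fixed-point LTI filter:
# with |x[n]| <= X, the peak of y = h * x is X * sum(|h|), attained by x[n] = X*sign(h[N-1-n]).
import numpy as np
from scipy.signal import lfilter

# Example recursive (IIR) system: y[n] = 0.9*y[n-1] + x[n] - 0.5*x[n-1]
b, a = [1.0, -0.5], [1.0, -0.9]
N = 2000                                       # truncation length for the impulse response
impulse = np.zeros(N); impulse[0] = 1.0
h = lfilter(b, a, impulse)

X_max = 1.0                                    # input range assumed symmetric: |x| <= 1
worst_input = X_max * np.sign(h[::-1])         # sign-matched (time-reversed) input sequence
y = lfilter(b, a, worst_input)
print("observed peak:", y[-1], " L1 bound:", X_max * np.sum(np.abs(h)))
```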
  • A. R. Fereydouni, A. Charmin *, H. Vahdaty, H. Nasir Aghdam Pages 1231-1239
    In this paper, the combined estimation of the channel, the receiver frequency-dependent IQ imbalance, and the carrier frequency offset (CFO) under a short cyclic prefix (CP) length is considered in an orthogonal frequency division multiplexing system. An adaptive algorithm based on the set-membership filtering algorithm is used to compensate for these impairments. With a short CP length, a per-tone equalization (PTEQ) structure is used to avoid inter-symbol interference. Due to the CFO impairment and IQ imbalance in the receiver, we expand the PTEQ structure to a two-branch structure. This structure has high computational complexity, so using the set-membership filtering idea with a variable step size reduces the average computation of the system while also increasing the convergence speed of the estimates. On the other hand, applying a wavelet transform to each branch of this structure before applying the adaptive filters increases the estimation speed. The proposed algorithm is named SMF-WP-NLMS-PTEQ. The simulation results show better performance than the usual adaptive algorithms. Besides, the estimation and compensation of channel effects, receiver IQ imbalance, and carrier frequency offset under a short CP can be easily accomplished by this algorithm.
    Keywords: Channel Estimation, Per-Tone Equalization Structure, Set-Membership Filtering, Short Cyclic Prefix, Wavelet Packet Transform
  • S. J. Kazemitabar *, Z. Shaeiri Pages 1240-1248
    Collecting insurance fraud samples is costly and, if performed manually, very time consuming. This issue suggests the use of unsupervised models for fraud data collection. One of the accurate methods in this regard is Spectral Ranking of Anomalies (SRA), which has been shown to work better than other methods specifically for auto-insurance fraud detection. However, this approach is not scalable to large samples and is not appropriate for online fraud detection, while real-time fraud management systems are necessary to prevent huge losses. In this study, we propose an implementation methodology which makes it possible to apply SRA to big data scenarios. We exploit the power of spectral ranking of anomalies to create an estimated target variable from the unlabeled dataset. We then use two robust models, namely random forest and deep neural networks, to fit a model based on the estimated labeled training set. Next, the incoming live data are fed to these trained models to predict the target variable. Simulation results confirm that the proposed approach has higher speed and an acceptable false alarm rate compared to existing related methods.
    Keywords: Spectral Ranking of Anomalies, Auto-insurance Fraud Detection, Random Forest, Deep Learning
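    The two-stage idea in the abstract above, turning an unsupervised anomaly ranking into pseudo-labels and then training a fast supervised scorer, is sketched below. IsolationForest stands in for the paper's spectral ranking of anomalies purely for illustration; the threshold and features are assumptions.

```python
# Sketch: unsupervised anomaly ranking -> pseudo-labels -> supervised model for live scoring.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
X_unlabeled = np.vstack([rng.normal(0, 1, (950, 5)),       # "normal" claims
                         rng.normal(4, 1, (50, 5))])       # anomalous claims

# Stage 1: unsupervised ranking -> estimated fraud labels (top 5% most anomalous).
scores = -IsolationForest(random_state=0).fit(X_unlabeled).score_samples(X_unlabeled)
pseudo_labels = (scores >= np.quantile(scores, 0.95)).astype(int)

# Stage 2: supervised model trained on pseudo-labels, used to score incoming claims quickly.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_unlabeled, pseudo_labels)

incoming = rng.normal(4, 1, (3, 5))         # live claims to score
print(clf.predict_proba(incoming)[:, 1])    # estimated fraud probabilities
```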
  • V. Kehri *, R. N. Awale Pages 1249-1256
    This work presents a technique for the analysis of facial electromyogram signal activities to classify five different facial expressions for computer-muscle interfacing applications. Facial electromyography (FEMG) is a technique for recording the asynchronous activation of neurons inside the facial muscles with non-invasive electrodes. FEMG pattern recognition is a difficult task for researchers, where classification accuracy is a key concern. Artifacts, such as eye-blink activity and electroencephalogram (EEG) signal interference, can corrupt these FEMG signals and directly affect the classification results. In this work, a robust wavelet-based thresholding technique, which comprises a wavelet transform (WT) method and a statistical threshold, is proposed to remove the different artifacts from FEMG datasets and improve the recognition accuracy rate. A set of five different raw FEMG datasets was analyzed. Four wavelet basis functions, namely haar, coif3, sym3, and bior4.4, were considered. The performance parameters (signal-to-artifact ratio (SAR) and normalized mean square error (NMSE)) were utilized to understand the effect of the proposed signal denoising protocol. After denoising, different statistical features were extracted. Two pattern recognition algorithms, the support vector machine (SVM) and the least square support vector machine (LSSVM), were implemented to classify the extracted features. The performance accuracy of the SVM and LSSVM classifiers was evaluated and compared to determine which classifier is best for facial expression classification. The results showed that: (i) the proposed denoising technique improves the performance parameter results; (ii) the proposed work gives a best classification accuracy of 95.24%.
    Keywords: Facial Electromyogram, Wavelet Transform, Support Vector Machine, Least-square Support Vector Machine
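    A minimal wavelet-threshold denoising sketch related to the abstract above is given below, assuming a 1-D FEMG channel and PyWavelets. The universal (VisuShrink) threshold used here is a common stand-in and not necessarily the paper's exact statistical threshold; the signal is a toy placeholder.

```python
# Sketch: soft wavelet thresholding of a noisy 1-D signal with a sym3 basis.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="sym3", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 20 * t) * np.exp(-3 * t)     # toy muscle-burst shape
noisy = clean + 0.3 * rng.standard_normal(t.size)
femg_denoised = wavelet_denoise(noisy)
```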
  • M. Abbasi *, M. Rafiee Pages 1257-1265
    Efficient parallelization of genetic algorithms (GAs) on state-of-the-art multi-threading or many-threading platforms is a challenge due to the difficulty of scheduling hardware resources with respect to the concurrency of threads. In this paper, to resolve this problem, a novel method is proposed which parallelizes the GA by designing three concurrent kernels, each of which runs some dependent effective operators of the GA. The proposed method can be straightforwardly adapted to run on many-core and multi-core processors by using the Compute Unified Device Architecture (CUDA) and Threading Building Blocks (TBB) platforms. To efficiently use the valuable resources of such computing cores in the concurrent execution of the GA, the threads that run any of the triple kernels are synchronized by a considerably fast switching technique. The proposed method was used to parallelize a GA-based solution of the Traveling Salesman Problem (TSP) on the CUDA and TBB platforms with identical settings. The results confirm the superiority of the proposed method over state-of-the-art methods in the effective parallelization of GAs on Graphics Processing Units (GPUs) as well as on multi-core Central Processing Units (CPUs). Also, for GA problems with a modest initial population, although the switching time among GPU kernels is negligible, the TBB-based parallel GA exploits the resources more efficiently.
    Keywords: Genetic Algorithm, Parallel, Multi-core, Many-core, Traveling Salesman Problem
  • R. Mortazavi * Pages 1266-1273
    One of the main techniques used in data mining is data clustering, which has many applications in computer science, biology, and the social sciences. Constrained clustering is a type of clustering in which side information provided by the user is incorporated into current clustering algorithms. One of the well-researched constrained clustering algorithms is called microaggregation. In a microaggregation technique, the algorithm divides the dataset into groups containing at least k members, where k is a user-defined parameter. The main application of microaggregation is in Statistical Disclosure Control (SDC) for privacy-preserving data publishing. A microaggregation algorithm is qualified based on the sum of within-group squared errors (SSE). Unfortunately, it has been proven that the optimal microaggregation problem is NP-hard in general, but the special univariate case can be solved optimally in polynomial time. Many heuristics exist for the general case of the problem that are founded on the univariate case. These techniques order multivariate records in a sequence. This paper proposes a novel method for record ordering. Starting from a conventional clustering algorithm, the proposed method repeatedly puts multivariate records into a sequence and then clusters them again. The process is repeated until no improvement is achieved. Extensive experiments have been conducted in this research to confirm the effectiveness of the proposed method for different parameters and datasets.
    Keywords: Constrained Clustering, Microaggregation, data privacy
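    The sketch below illustrates the quality measure named in the abstract above: an ordered record sequence is cut into groups of at least k members and scored by the within-group sum of squared errors (SSE). The paper's iterative re-ordering step is not reproduced; the ordering heuristic and data are placeholders.

```python
# Sketch: within-group SSE of a k-member-minimum partition of an ordered record sequence.
import numpy as np

def within_group_sse(records, k):
    n = len(records)
    sse, start = 0.0, 0
    while start < n:
        end = start + k
        if n - end < k:              # remainder smaller than k -> absorb it into this group
            end = n
        group = records[start:end]
        centroid = group.mean(axis=0)
        sse += float(((group - centroid) ** 2).sum())
        start = end
    return sse

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))                  # multivariate records
order = np.argsort(data @ rng.normal(size=3))     # any ordering heuristic (illustrative)
print(within_group_sse(data[order], k=3))
```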
  • H. Beiki, S. M. Seyedhosseini *, V. R. Ghezavati, S. M. Seyedaliakbar Pages 1274-1284
    Throughout history, nature has exposed humans to destructive phenomena such as earthquakes, floods, droughts, tornadoes, volcanic eruptions, and tropical and marine storms. The large scale of damages and casualties caused by natural disasters around the world has led to extensive applied research on the preparation and development of comprehensive disaster management systems to minimize the resulting casualties and financial damages. Based on this motivation and the challenges in the field, this research designs an integrated relief chain to simultaneously optimize the preparedness and response phases of disaster management. Decisions to improve the supply chain include locating distribution centers for relief supplies, determining the amount of inventory stored in facilities in the pre-disaster phase, locating temporary care centers and transportation points for the injured, allocating relief services to the affected areas, and routing the vehicles used to distribute relief supplies and evacuate the injured. The results show that decreasing the capacity of distribution centers increases the shortage of supplies, while increasing the capacity of these centers reduces it.
    Keywords: Assessment of Injured People, Disaster Relief, Location-Routing Problem, Response Phase, Uncertainty
  • M. A. Fattahzadeh, A. Saghaei * Pages 1285-1292
    Today, with the growth of technology, process monitoring using video and satellite sensors has expanded, owing to their rich and valuable information. Recently, some researchers have used sequential images for defect detection because a single image is not sufficient for process monitoring. In this paper, by adding the time dimension to the image-based process monitoring problem, we detect process changes (such as changes in size, location, speed, color, etc.). The temporal correlation between the images and the high dimensionality of the data make this a complex problem. To address this, using the sequential images, a statistical approach with ridge regression and a Q control chart is proposed to monitor the process. This method can be applied to color and gray images. To validate the proposed method, it was applied to a real case study and compared to the best methods in the literature. The obtained results showed that it was more effective in finding the changes.
    Keywords: Control Chart, Image Processing, Real-Time Monitoring, Ridge Regression, Statistical Process Control
  • M. Khedmati *, F. Seifi, M. J. Azizi Pages 1293-1303
    Bitcoin, as the current leader among cryptocurrencies, is a new asset class receiving significant attention in the financial and investment community and presents an interesting time series prediction problem. In this paper, forecasting models based on classical approaches such as ARIMA and machine learning approaches including Kriging, Artificial Neural Network (ANN), Bayesian method, Support Vector Machine (SVM), and Random Forest (RF) are proposed and analyzed for modelling and forecasting the Bitcoin price. While some of the proposed models are univariate, the other models are multivariate and, as a result, the maximum, minimum, and opening daily prices of Bitcoin are also used in these models. The proposed models are applied to the Bitcoin price from December 18, 2019 to March 1, 2020, and their performances are compared in terms of the RMSE and MAPE performance measures using the Diebold-Mariano statistical test. Based on the RMSE and MAPE measures, the results show that SVM provides the best performance among all the models. In addition, the ARIMA and Bayesian approaches outperform the other univariate models, providing smaller values of RMSE and MAPE.
    Keywords: Time series forecasting, Machine Learning, bitcoin, Multivariate Models
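    For the evaluation described in the abstract above, the sketch below computes the two measures named (RMSE and MAPE) and fits one of the compared model families (support vector regression) on a lagged-price feature set. The series is synthetic and the features and tuning are assumptions, not the paper's setup.

```python
# Sketch: RMSE/MAPE evaluation of an SVR forecaster on a toy price series.
import numpy as np
from sklearn.svm import SVR

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 50, 120)) + 8000       # toy daily closing prices
lags = 3
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])  # lag features
y = price[lags:]
X_train, X_test, y_train, y_test = X[:-20], X[-20:], y[:-20], y[-20:]

model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X_train, y_train)
pred = model.predict(X_test)
print("RMSE:", round(rmse(y_test, pred), 2), " MAPE:", round(mape(y_test, pred), 2), "%")
```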
  • Z. Saeidi-Mobarakeh, R. Tavakkoli-Moghaddam *, M. Navabakhsh, H. Amoozad-Khalili Pages 1304-1310
    This study concentrates on designing a medical waste management system with a hierarchical structure, including a local government and a waste management planner. The upper level seeks to design and control the waste management facilities by minimizing the environmental risks related to the disposal of medical waste, while the lower-level model determines the waste collection plans by minimizing only its total operational costs. Therefore, this study develops a bi-level mathematical model in which the benefits of both stakeholders are taken into account. As this problem poses difficulty in searching for the optimal solution, a bi-level meta-heuristic approach based on the Genetic Algorithm (GA) is employed to solve the problem. Finally, a case study is conducted to show that the proposed model and solution approach are practical and efficient.
    Keywords: Hazardous Waste Management, Location-routing Problem, Bi-level Programming, Meta-Heuristic Algorithm
  • H. Hernadewita, B. I. Saleh * Pages 1311-1320
    A construction project is a business full of risk in every process due to its complexity, changes, and the involvement of various stakeholders. One of the critical risks in a construction project lies in the supply chain. Identifying and assessing risk with the right tools and methods in that area inevitably affects the success of the project. Unfortunately, research on the tools and methods used in the construction supply chain is still limited and scattered. The objective of this research is to analyze the gap in the literature and to propose improvements in tools and methods for risk identification and assessment in the construction supply chain. This research uses the systematic literature review method to find and investigate the tools and methods. The four methods found are: the Analytic Hierarchy Process (AHP), Failure Mode Effect Analysis (FMEA), the Supply Chain Operations Reference (SCOR) model, and Hazard and Operability (HAZOP) analysis. Their strengths and weaknesses, together with their potential use as tools and methods for identifying and assessing construction supply chain risk, are then summarized. The use of SCOR combined with FMEA has been shown to be a practical approach for identifying and assessing construction supply chain risk.
    Keywords: Construction Supply Chain, Risk Identification, Assessment Method, Analytic Hierarchy Process, Hazard and Operability, Failure Mode Effect Analysis, Supply Chain Operations Reference
  • A. Salmi *, L. Bousshine, K. Lahlou Pages 1321-1330
    This paper presents a new model of the equivalent modulus derived from the Repeated Load CBR (RL-CBR) test without a strain gauge. This model is an updated version of the Araya et al. (2011) model; the update consists of using the vertical strain as the weighting factor, instead of the vertical displacement, in the calculation of the mean vertical and horizontal stresses. The accuracy of the equivalent modulus was improved by decreasing the relative error from 25% to 3%. The extra-large mold adopted by Araya et al. is used with a thickness of 8 mm instead of 14.5 mm. In experimental investigations, the equivalent modulus may be calculated from the experimental data and the model parameters estimated by finite element (FE) simulation. There are five model parameters when the RL-CBR test with a strain gauge is used, and three parameters when the strain gauge is not used. The model parameters are determined in two steps. First, the FE simulation of the RL-CBR test is conducted using various loading conditions (i.e., plunger penetration) and various quality ranges of unbound granular materials (UGM). In the second step, a non-linear multidimensional regression is performed to fit the equivalent modulus to the Young's modulus. The influence of the FE analysis inputs is investigated to find the optimal input set that gives the best compromise between model accuracy and calculation time. The calculation of the model parameters is carried out based on the optimal data set. Results from the new model and those from the Araya et al. model are compared and show the improved accuracy of the developed model.
    Keywords: Modulus, CBR, Granular Material, Least-squares Method, Stiffness, Strain Gauge, Resilient Behavior
  • M. A. Aswad *, S. H. Awad, A. H. Kaayem Pages 1331-1339
    For the past decades, researchers have shown immense interest in investigating the natural advantage of preparing composites from minerals such as bauxite particles, and have proved their effectiveness as cost-effective reinforcing agents in the fabrication of high-performance composites. This study is a new attempt at using Iraqi natural bauxite powder in different proportions (2, 4 and 6 wt%) in the preparation of aluminum metal matrix composites (AMMCs) using stir casting and Mg additives. In the experimental work, the bauxite stones were crushed and milled, and the powder was then fired at 1400 °C. The powders were characterized using particle size, XRD, and XRF analyses. The AMMC casts were machined, polished, and preheated, and their properties were characterized using hardness measurements, microstructural observations, and calculation of their Young's modulus, Poisson's ratio, and fracture toughness. Also, their fracture toughness was evaluated by means of crack mouth opening displacement (CMOD) measurements from extensometer recordings. The results proved the successful production of AMMCs with improved fracture toughness, hardness, and elastic modulus using Mg and Iraqi fired bauxite added at 2 and 4 wt% by stir casting. Moreover, the CMOD measurements showed the effect of adding bauxite particles at 2 and 4 wt% in increasing the maximum load at failure and the critical CMOD at critical load of the matrix materials by about 25 and 44%, and 32 and 47%, respectively. Also, at these ratios, the calculated fracture toughness of the matrix materials in terms of KIC, and the Young's modulus, showed improvements of about 22 and 69%, and 8 and 12%, respectively. The addition of bauxite at 6% could not provide the required improvement in fracture toughness, despite recording the highest improvements in hardness (57%) and elastic modulus (22%), due to the brittle behavior of the AMMCs at this ratio.
    Keywords: Aluminum-based Composite Material, Iraqi Bauxite, Stir Casting, Crack Opening Displacement, Brinell Hardness, Fracture Toughness, Mechanical Properties
  • T. A. Hafiz * Pages 1340-1346
    The properties of Carbon Fiber-Reinforced Polymers (CFRP) are greatly affected under extreme environmental conditions. This paper reports an experimental study to determine the response of IM7-carbon/977-2 Cycom epoxy laminates under different humidity and temperature conditions. Short-term 3-point bending creep tests using Dynamic Mechanical Analysis (DMA) were used to test the dry and saturated samples at various temperature levels. The dry coupons were tested at room temperature (RT) and at 60-120 °C with 20 °C increments, and then at 130 °C and 150-180 °C with 10 °C increments for each subsequent test. The saturated (wet) coupons were tested at RT, at 40-120 °C with 10 °C increments for each subsequent test, and at 145 °C, 150 °C, and 160 °C. The time-temperature shift factor (TTSF) was applied, and it is shown that the viscoelastic behavior of the investigated IM7-carbon/977-2 epoxy laminates is accurately predicted through the use of the TTSF. It has also been shown that determining the viscoelastic behavior at elevated temperatures helps to predict the behavior at temperatures below the glass transition temperature using the TTSF. The long-term life of the material is relatively easily predicted using the TTSF by conducting traditional short-term laboratory tests.
    Keywords: Carbon Fiber Reinforced Polymers, Dry-wet Coupons, High Temperature, Moisture, Storage Modulus, Time-Temperature Shift Factor
  • A. Fekari *, N. Javani, S. Jafarmadar Pages 1347-1353
    The present study aims to investigate the exhaust gas emissions of a 4-cylinder gasoline engine. An experimental study of the engine management system has been conducted for emissions optimization, using the Winols specialized software. In order to achieve steady-state conditions in the experiments, the temperature of the water and engine oil before each test was brought to the engine's working temperature (90 °C) so that the various parts of the engine remained stable, and the tests were performed under in-line engine operation. Two sets of tests, at idle (850-900 rpm) and mid-range (2500 rpm) speeds, are considered. The experiments were performed on three identical engines with different mileages, and the obtained results were discussed. According to the results, after applying changes to the engine management system, a 22% reduction in unburned hydrocarbon emissions was obtained in both cases. Furthermore, reductions of 31 and 5% in carbon monoxide emissions were obtained at idle and mid-range, respectively. As a result of applying these changes, there was a reduction of 1.4% in NOx emissions in the idle case and a decrease of about 19% at 2500 rpm.
    Keywords: Engine management system, Exhaust gases, Exhaust gases emission, Spark advance
  • S. M. Moghimi, M. Abbasi *, M. Khaki Jamei, D. D. Ganji Pages 1354-1363
    The purpose of the present study is to investigate the boundary layer separation point in a magnetohydrodynamic diffuser. As an innovation, the Reynolds number (Re) at the separation point is determined for non-Newtonian fluid flow under the influence of the non-uniform magnetic field of an electrical solenoid, in an empirical case. The governing equations, including continuity and momentum, are solved by applying the semi-analytical collocation method (C.M.). The analysis revealed that for specific values of De from 0.4 to 1.6, α from 20° to 2.5°, and Ha from zero to 8, the Re value at the separation point increases from 52.94 to 1862.78; thus, the boundary layer separation is postponed. Furthermore, the impact of the magnetic field intensity on the separation point is analyzed from a physical point of view. It is observed that the wall shear stress increases with increasing magnetic field intensity, which leads to a delay in boundary layer separation.
    Keywords: Diffuser, Non-uniform Magnetic Field, Separation Point, Second Grade
  • D. Changbin *, L. Yongping, W. Yongqiao Pages 1364-1374
    In order to study the dynamic meshing characteristics of the elliptic cylinder gear and obtain the meshing error of the gear transmission system, a two-dimensional static contact analysis of the gear tooth surface is carried out using ANSYS software, and the key parts of the contact area of the tooth surface are determined. Then, the dynamic meshing model of the elliptic cylinder gear is established and the dynamic contact process under load is simulated with ANSYS LS-DYNA software. The distribution laws of the effective plastic strain, effective stress, and pressure of the driving and driven wheels are obtained. On this basis, the distribution law of the meshing error is obtained by calculation. The results show that the distribution of stress, strain, and tooth surface pressure during tooth meshing is related to the position of the tooth on the elliptical pitch curve. The position of the tooth on the pitch curve and the load it bears have a certain influence on the meshing error. The results of this research can provide guidance for subsequent studies of the transmission error of non-circular gears, gear modification, and engineering applications.
    Keywords: Elliptical Cylinder Gear, Static contact, finite element analysis, dynamic meshing characteristics, meshing error
  • M. Akbari, G. Shafabakhsh *, M. R. Ahadi Pages 1375-1386
    This study aims to evaluate the impact of pavement physical characteristics on the frequency of single-vehicle run-off-road (ROR) crashes on two-lane separated rural highways. In order to achieve this goal and to introduce the most accurate crash prediction model (CPM), the authors developed generalized linear models, including Poisson regression (PR), negative binomial regression (NBR), and non-linear negative binomial regression models. Besides exposure parameters, the examined pavement physical characteristic explanatory variables include the pavement condition index (PCI), the international roughness index (IRI), and the ride number (RN). A forward procedure was conducted in which the variables were added to the core model one by one. In the non-linear procedure, at each step, 39 functional forms were checked to see whether the new model fits better than the core/previous model. Several measures were used to assess the fitness of the model. In addition, other measures were employed to estimate external model validation and the error structure. Results showed that in the PR and NBR models, the variable coefficients were not significant. Findings of the suggested nonlinear model confirmed that PCI, as an objective variable, follows the experts' anticipation (i.e., better pavement condition is associated with fewer ROR crashes). Finally, it should be noted that the roughness variable was insignificant at the assumed significance level, so it made no contribution to ROR crashes. The results imply that improving the pavement condition most probably leads to a decrease in ROR crash frequency.
    Keywords: Highway Segmentation, Pavement Physical Characteristics, Crash Prediction Models, Run-Off-Road Accidents, Functional Forms, Nonlinear Negative Binomial Regression
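    The two count-model families named in the abstract above can be sketched as follows with statsmodels on synthetic segment data. Variable names (PCI, IRI, AADT exposure) mirror the abstract, but the coefficients, data, and model selection are illustrative; the paper's nonlinear functional-form search is not reproduced.

```python
# Sketch: Poisson and negative binomial crash-frequency models on synthetic segments.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
pci = rng.uniform(40, 95, n)                  # pavement condition index
iri = rng.uniform(1.0, 4.0, n)                # international roughness index (m/km)
aadt = rng.uniform(2000, 20000, n)            # traffic exposure
mu = np.exp(-4.0 - 0.02 * pci + 0.1 * iri + 0.8 * np.log(aadt))
crashes = rng.poisson(mu)                     # synthetic ROR crash counts

X = sm.add_constant(np.column_stack([pci, iri, np.log(aadt)]))
poisson_fit = sm.GLM(crashes, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(poisson_fit.aic, nb_fit.aic)            # compare goodness of fit
```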
  • K. Divsalar, R. Shafaghat *, M. Farhadi, R. Alamian Pages 1387-1396
    This work presents a numerical simulation of an underwater glider to investigate the effect of the angle of attack on hydrodynamic coefficients such as lift, drag, and torque. Due to the vital role of these coefficients in designing the controllers of a glider, and to obtain accurate results, the simulation has been carried out over a range of operating velocities. The total length of the underwater glider with two wings is 900 mm, with a 4-digit NACA0009 profile. The fluid flow is discretized and solved by computational fluid dynamics using the finite volume method. Since the Reynolds number range for this study is in the turbulent flow regime (up to 3.7e06), the κ-ω SST formulation was used to solve the Navier-Stokes and continuity equations, and the angles of attack range from -8 to 8 degrees. The main purpose of this research is to study the effect of each of the dynamic parameters of glider motion, such as velocity and angle of attack, on the hydrodynamic coefficients. Based on the results, the drag and lift coefficients increase with increasing angle of attack. In addition, the drag coefficient increases with increasing velocity; however, when the glider velocity is increased, the lift coefficient does not change significantly, except at the highest angle of attack, where it decreases. The highest drag coefficient is 0.0246, which corresponds to an angle of attack of -8 degrees and a Reynolds number of 3738184. In addition to its simple geometry, the glider studied in this paper shows relatively little resistance to flow.
    Keywords: Underwater gliders, numerical simulation, finite volume method, Hydrodynamic coefficients, Angle of attack
  • A. Vyas *, J. Menghani, H. Natu Pages 1397-1402
    In spite of the excellent corrosion resistance, good ductility, and low cost of AISI 316 austenitic stainless steel, the low hardness and poor mechanical characteristics of the material restrict its applicability in several industrial services. To improve the mechanical properties, AlFeCuCrCoNi-WC10 high-entropy alloy coatings were deposited via laser cladding on an austenitic stainless steel AISI 316 substrate. The influence of WC on the phase constituents, microstructure, microhardness, and elemental distribution was investigated using X-ray diffractometry, optical microscopy, a microhardness tester, and FESEM-EDS (Energy Dispersive Spectroscopy), respectively. The XRD peaks revealed that the as-clad AlFeCuCrCoNi-WC10 multiple principal element alloy coating is composed of BCC, FCC, and W-rich phases. The cladding zone microstructure mainly consists of fine-grained, non-directional, equiaxed crystals away from the base material and columnar grains near the base material. The energy dispersive spectroscopy indicated segregation of W and Cr in the interdendritic region. However, the other elements of the multiple principal element alloy are observed to be uniformly distributed throughout the cladding. The microhardness of the AlFeCuCrCoNi-WC10 high-entropy alloy coating (670 HV0.5) was 4.5 times greater than that of the AISI 316 substrate.
    Keywords: Laser-assisted Cladding, Microhardness, High Entropy Alloy, Microstructure
  • Z. H. Wu, Y. Q. Xu *, K. A. Liu, Z. Y. Chen Pages 1403-1412
    Shaft deflection is one of the critical factors that deteriorate the grease lubrication state inside tapered roller bearings (TRBs). Therefore, in this paper, on the premise that the TRB is subjected to combined loads and the shaft is deflected, a grease lubrication model at the TRB contacts is constructed, in which the interaction loads and the linear and angular displacements of the bearing parts are involved. The grease film characteristics are then numerically analyzed from the perspective of the whole bearing to clarify the negative effects of shaft deflection on the grease film characteristics. The results show that the effect of the shaft deflection on the load distribution in the absence of a radial load is greater than in its presence. The angular misalignment of the roller is mainly affected by the deflection. The deflection results in an irregular film shape and pressure profile at the TRB contacts, and induces a significant pressure spike and necking feature at the roller end.
    Keywords: film characteristics, grease lubrication, shaft deflection, tapered roller bearing
  • A. Hajarian, M. Osanloo * Pages 1413-1422
    Today, during the life of an open pit mine, millions of tons of material, including waste and ore, are displaced by truck fleets. In the case of a shallow ore deposit, located up to 300 meters below the ground surface, depending on the preliminary equipment size and capacity, it takes three to five years to remove the overburden and waste rocks to expose the ore body. In that period, the main waste dump site will be used for waste disposal. Apart from the characteristics of the waste dump location, such as its geological and geotechnical properties, the major factors influencing the hauling process are the topography, the hauling length, and the construction cost of the haul road. Depending on the circumstances, truck transportation cost comprises 45 to 60% of the cost of mining one tonne of ore. Thus, good site selection for the waste dump, in coordination with the main haul road path, leads to a significant saving of economic resources. In this research, while identifying the effective factors in selecting waste dump sites, a linear mathematical model is developed to find a suitable site for waste dump disposal while minimizing the haul road construction cost.
    Keywords: Mine haul road, waste dump, Earthwork optimization, Linear Programming
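    As a highly simplified stand-in for the linear model described in the abstract above, the sketch below allocates a known waste tonnage among candidate dump sites to minimize combined haulage and (amortized) road construction cost, subject to site capacities. Binary site-opening decisions and the paper's full formulation are not reproduced; all cost and capacity figures are assumptions.

```python
# Sketch: a tiny transportation-style LP for waste-dump allocation with scipy.
import numpy as np
from scipy.optimize import linprog

haul_cost = np.array([1.8, 2.4, 1.2])         # $/t hauled to each candidate site (illustrative)
road_cost = np.array([0.6, 0.2, 1.1])         # $/t amortized haul-road construction cost
capacity = np.array([4.0e6, 6.0e6, 3.0e6])    # site capacities, t
total_waste = 8.0e6                            # total waste to be dumped, t

c = haul_cost + road_cost                      # objective: cost per tonne sent to each site
res = linprog(c,
              A_ub=np.eye(3), b_ub=capacity,             # respect site capacities
              A_eq=np.ones((1, 3)), b_eq=[total_waste],  # dump all waste
              bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)                          # tonnes sent to each site, total cost
```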
  • M. Dvoynikov, A. Kunshin *, P. Blinov, V. Morozov Pages 1423-1430
    This article presents the results of a study on the possibility of increasing the efficiency of drilling directional straight sections of wells using screw downhole motors (SDM) with a combined drilling method involving rotation of the drill string (DS). The goal is to ensure steady-state operation of the SDM with simultaneous rotation of the DS by reducing the amplitude of oscillations and adjusting the drilling mode parameters on the basis of mathematical modeling of the SDM – DS system. Results of an experimental study on the extreme distribution of lateral and axial oscillations of the SDM frame, depending on the geometric parameters of the gerotor mechanism and the modes ensuring stable operation, are presented. Approaches to developing a mathematical model and methodology are conceptually outlined that allow determining the range of self-oscillations of the SDM – DS system and the boundaries of rotational and translational wave perturbations for a heterogeneous rod with an installed SDM when drilling directional straight sections of a well. This mathematical model of the SDM – DS system's dynamics makes it possible to predict the optimal parameters of the directional drilling mode that ensure stable operation of the borehole assembly.
    Keywords: drilling control, drilling dynamics, Directional Drilling, horizontal well, bit shock, bit vibration
  • K. R. Argimbaev *, D. N. Ligotsky, K. V. Mironova, E. V. Loginov Pages 1431-1439
    The inevitable depletion of mineral resources, the constant deterioration of the geological and mining conditions for the development of mineral deposits, and the recovery of raw materials from mining waste by recycling are all urgent problems we face today. Solving this problem may ensure a considerable extension of the raw material source; a decrease in investments in opening new deposits; cost savings for dumping and handling of tailing dumps and for disturbed land remediation; and a social and economic effect due to a considerable reduction in environmental pollution. This article deals with the study of iron-containing tailings dumped at the tailing dumps of the ore-refinery and processing facilities located in the Kursk Magnetic Anomaly (KMA GOKs), where samples were taken for this study. The article contains the results of the material composition study, namely: the chemical composition, the mineralogical-petrographic study of thin and polished sections, the grain size distribution, and the physical-mechanical properties of the tailing samples. Regularities were revealed in the change of the useful component content due to gravity differentiation. It was also noted that the sulphur content increased near the pulp discharge outlet due to pyrite accumulation. The ratio of ore minerals in the tailings and the fineness ratio of the sand fraction were measured. Examination with a focused beam microscope at x90 to x600 magnification showed a variety of grain sizes and shapes that facilitate using the tailing materials, after additional processing, as sand in the construction industry.
    Keywords: material composition, polished section, samples, Tailings, thin section