Table of Contents
Volume: 17, Issue: 1, 2023
Publication date: 1402/01/10
Number of titles: 12


Pages 1-18
Balanced acceptance sampling is a relatively new sampling scheme that has the potential to improve the efficiency of spatial studies. The design has two drawbacks: it can have low entropy, and some unbiased estimators cannot be calculated. In this paper, these shortcomings are addressed by integrating simple random sampling with balanced acceptance sampling. In a simulation study on two datasets, the entropy and spatial balance of the introduced sampling design are calculated and compared with the corresponding results for balanced acceptance sampling and simple random sampling. The simulation results show that the introduced design has the flexibility to trade off entropy against spatial balance.
Keywords: Entropy of sampling design, Halton sequence, inclusion probability, spatially balanced sampling 
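Balanced acceptance sampling selects spatially balanced samples by mapping population units into the unit square and taking points of a quasi-random Halton sequence. As a rough illustrative sketch (not the paper's implementation), the Halton sequence itself can be generated as follows:

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse of n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_points(count, bases=(2, 3)):
    """First `count` points of the 2-D Halton sequence in the unit square."""
    return [tuple(radical_inverse(i, b) for b in bases)
            for i in range(1, count + 1)]
```

Consecutive Halton points spread evenly over the unit square, which is what produces the spatial balance of the selected sample.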
Pages 19-44
Recently, quantile regression (QR) models have often been applied to longitudinal data analysis. When the distribution of the responses is skewed and asymmetric due to outliers and heavy tails, QR models are well suited. In this paper, a semiparametric quantile regression model is developed for analysing continuous longitudinal responses. The error term is assumed to follow an asymmetric Laplace (AL) distribution for modeling the continuous responses. The correlation among longitudinal responses belonging to the same individual is taken into account through a random-effects approach. We use the local polynomial kernel to approximate the nonparametric part of the model. Parameter estimation is performed in a Bayesian framework using the Gibbs sampling method. The performance of the model is evaluated in a simulation study. To illustrate the proposed model's application, a Peabody Individual Achievement Test (PIAT) dataset is analyzed.
Keywords: Semiparametric quantile regression, continuous longitudinal data, local polynomial kernel, asymmetric Laplace distribution, semiparametric model, Gibbs sampling 
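The connection between QR and the AL distribution is that maximizing an AL likelihood is equivalent to minimizing the quantile check (pinball) loss. A minimal sketch of that equivalence for a single sample quantile (illustrative only, not the paper's Gibbs sampler):

```python
def check_loss(u, tau):
    """Quantile check (pinball) loss; its expected minimizer is the tau-th quantile."""
    return u * (tau - (1 if u < 0 else 0))

def sample_quantile(y, tau):
    """Minimize total check loss over candidates in the sample; this recovers
    the tau-th sample quantile (equivalently, the AL maximum-likelihood location)."""
    return min(y, key=lambda q: sum(check_loss(yi - q, tau) for yi in y))
```

The minimizer of the check loss over the real line is always attained at a sample point, so restricting the search to the observed values is enough here.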
Pages 45-62
In this paper, a two-stage procedure is considered for estimating the autoregressive parameters of the p-th order autoregressive model (AR(p)). Point estimation and the construction of fixed-size confidence ellipsoids, both based on least-squares estimators, are investigated. Performance criteria are established, including asymptotic risk efficiency, asymptotic efficiency, and asymptotic consistency. Monte Carlo simulation studies are conducted to investigate the performance of the two-stage procedure. Finally, real time-series data are analyzed to illustrate the applicability of the two-stage procedure.
Keywords: Asymptotically consistent, asymptotically efficient, asymptotically risk efficient, fixed-size confidence ellipsoids, two-stage procedure 
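The least-squares estimator at the core of the procedure regresses each observation on its p lagged values. A minimal sketch of that model-fitting step (not the sequential stopping rule, which is the paper's contribution):

```python
import numpy as np

def ar_least_squares(x, p):
    """Least-squares estimates of the AR(p) coefficients:
    regress x[t] on (x[t-1], ..., x[t-p])."""
    x = np.asarray(x, dtype=float)
    # column j holds lag j+1 of the series, aligned with the targets x[p:]
    X = np.column_stack([x[p - j - 1 : len(x) - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```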
Pages 63-94
We consider the problem of model selection in the vector autoregressive model with normal innovations. Tests such as Vuong's and Cox's tests are provided for order and model selection, i.e. for selecting the order and a suitable subset of regressors. We propose a modified log-likelihood ratio test for selecting subsets of regressors. European oil prices (Brent) and real gross domestic product (GDP) are considered as real data. Since the Brent series Granger-causes the GDP series, we fit a vector autoregressive model and select the optimal model based on the model selection tests. The analysis shows that Vuong's test, Cox's test, and the proposed test are appropriate for order and model selection in vector autoregressive models with normal innovations. In a simulation study, the power of the proposed test is at least as good as that of Vuong's test.
Keywords: Cox's test, maximum likelihood estimation, misspecified model, nested models, vector autoregressive model, Vuong's test 
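Vuong's test standardizes the pointwise log-likelihood differences between two competing models; under the null of equivalent fit the statistic is asymptotically standard normal. A minimal sketch (illustrative only; the paper's modified test differs from this):

```python
import math

def vuong_statistic(loglik1, loglik2):
    """Vuong's non-nested test statistic: sqrt(n) times the mean of the
    pointwise log-likelihood differences, divided by their standard deviation."""
    d = [a - b for a, b in zip(loglik1, loglik2)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / n
    return math.sqrt(n) * mean / math.sqrt(var)
```

Large positive values favor the first model, large negative values the second, and values near zero indicate that the data cannot distinguish the two fits.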
Pages 95-111
In this paper, we introduce a new skewed distribution of which the normal and power normal distributions are two special cases. The distribution is obtained by taking the geometric maximum of independent, identically distributed power normal random variables; we call it the power normal-geometric distribution. Some mathematical properties of the new distribution are presented. Maximum likelihood estimates of the parameters are obtained via an EM algorithm, and simulation experiments are presented to evaluate the performance of the maximum likelihood estimators. We analyze two data sets for illustrative purposes. Finally, we derive a bivariate version of the proposed distribution.
Keywords: Geometric distribution, power normal distribution, EM algorithm 
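The geometric-maximum construction can be sampled directly: draw a Geometric(p) count N, then take the maximum of N power normal draws. A minimal sketch under an assumed parameterization (power normal CDF Phi(x)**alpha; N supported on {1, 2, ...}), not the paper's notation:

```python
import random
from statistics import NormalDist

def power_normal(alpha, rng=random):
    """One draw from the power normal distribution, whose CDF is
    Phi(x)**alpha, via the inverse-CDF method."""
    u = rng.random()
    return NormalDist().inv_cdf(u ** (1.0 / alpha))

def pn_geometric(alpha, p, rng=random):
    """Geometric maximum of iid power normal draws: N ~ Geometric(p)
    on {1, 2, ...} (parameterization is an assumption)."""
    n = 1
    while rng.random() > p:                # geometric number of draws
        n += 1
    return max(power_normal(alpha, rng) for _ in range(n))
```

With alpha = 1 and p = 1 the construction reduces to a single standard normal draw, matching the claim that the normal distribution is a special case.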
Pages 113-133
This paper considers parameter estimation for the Lomax distribution under progressive type-II censoring with random removals, assuming that the number of units removed at each failure time has a binomial distribution. The maximum likelihood estimators (MLEs) are derived using the expectation-maximization (EM) algorithm. The Bayes estimates of the parameters are obtained under both squared error and asymmetric loss functions using the Lindley approximation. We compare the performance of the procedures in a simulation study and on real data.
Keywords: Bayes estimator, binomial censoring scheme, EM algorithm, maximum likelihood estimator, Lomax distribution, Lindley approximation, type II progressive censoring 
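A progressive type-II censored sample with binomial removals can be simulated by repeatedly recording the earliest remaining failure and then withdrawing a binomial number of survivors. A minimal sketch (illustrative simulation only; the removal bookkeeping is one common convention, not necessarily the paper's):

```python
import random

def lomax_draw(alpha, lam, rng=random):
    """Inverse-CDF draw from Lomax(alpha, lam): F(x) = 1 - (1 + x/lam)**(-alpha)."""
    u = rng.random()
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

def progressive_sample(n, m, alpha, lam, p, rng=random):
    """Progressive type-II censored sample: n units on test, m observed
    failures; after each failure, a Binomial(spare, p) number of the
    surviving units is withdrawn at random."""
    alive = sorted(lomax_draw(alpha, lam, rng) for _ in range(n))
    observed = []
    for i in range(m):
        observed.append(alive.pop(0))            # earliest remaining failure
        spare = len(alive) - (m - 1 - i)         # survivors not reserved for future failures
        removals = sum(rng.random() < p for _ in range(max(spare, 0)))
        for _ in range(removals):
            alive.pop(rng.randrange(len(alive)))  # withdraw a random survivor
    return observed
```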
Pages 135-156
Nonparametric estimation (NE) of the kernel polynomial regression (KPR) model is a powerful tool for visually depicting the effect of covariates on a response variable when the data are unstructured and heterogeneous. In this paper, we introduce a KPR model that combines mixtures of nonparametric regression models with a bootstrap algorithm, considered in a heterogeneous and unstructured framework. The optimality properties of the estimators are also considered. Finally, we study a real heterogeneous and unstructured dataset using the KPR model.
Keywords: COVID-19, nonparametric estimation, kernel polynomial regression model, prediction analysis, graphical model 
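The building block of kernel polynomial regression is a local polynomial fit: at each evaluation point, a weighted least-squares polynomial is fitted with kernel weights that decay with distance. A minimal local linear sketch with a Gaussian kernel (illustrative only, not the paper's mixture-with-bootstrap estimator):

```python
import numpy as np

def local_linear(x0, xs, ys, h):
    """Local linear kernel estimate of E[y | x = x0]: weighted least squares
    of ys on (1, xs - x0) with Gaussian kernel weights of bandwidth h."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)       # Gaussian kernel weights
    X = np.column_stack([np.ones_like(xs), xs - x0])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], ys * sw, rcond=None)
    return beta[0]                                 # intercept = fitted value at x0
```

Because the local fit is exact for linear data regardless of the weights, a local linear smoother reproduces straight-line relationships without bias.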
A Bayesian Nominal Regression Model with Random Effects for Analysing Tehran Labor Force Survey Data
Pages 157-170
Large survey data are often accompanied by sampling weights that reflect the unequal probabilities of selecting samples in complex sampling designs. Sampling weights act as an expansion factor that, by scaling the subjects, turns the sample into a representative of the population. The quasi-maximum likelihood method is one approach for incorporating sampling weights in the frequentist framework; to obtain it, the ordinary log-likelihood is replaced by the weighted log-likelihood. Its Bayesian counterpart, the Bayesian pseudo-posterior estimator, is the usual Bayesian approach with the likelihood replaced by the quasi-likelihood function. Another approach for incorporating sampling weights, the Bayesian weighted estimator, is in fact a data-augmentation method in which a quasi-representative sample is generated by resampling the observed data with the normalized sampling weights. In this paper, these two approaches are used for parameter estimation in a nominal regression model with random effects. The proposed method is applied to small area estimation for the 2018 Tehran labor force survey.
Keywords: Bayesian approach, labor force survey, nominal data, random effects, sampling weights, small area estimation 
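The weighted (quasi) log-likelihood simply multiplies each unit's log-likelihood contribution by its normalized sampling weight. A minimal sketch for categorical responses (the data layout here is hypothetical, chosen only for illustration):

```python
import math

def weighted_loglik(y, probs, weights):
    """Weighted (quasi) log-likelihood for categorical responses:
    y[i] is the observed category of unit i, probs[i] the model's category
    probabilities for that unit, and weights[i] its sampling weight,
    normalized here to sum to the sample size."""
    n, wsum = len(y), sum(weights)
    return sum((n * w / wsum) * math.log(probs[i][y[i]])
               for i, w in enumerate(weights))
```

With equal weights the expression collapses to the ordinary log-likelihood, which is why the quasi-maximum likelihood method is a direct generalization of the unweighted analysis.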
Pages 171-190
In this paper, a one-sample point predictor of the random variable X is studied, where X is the occurrence of an event between successive visits $L_i$ and $R_i$, i=1,2,…,n (interval censoring). Our proposed method is based on the expected value of the conditional distribution of X given $L_i$ and $R_i$, i=1,2,…,n. To make the desired prediction, we approximate the unknown Weibull parameters using the midpoint approximation and approximate maximum likelihood (AML). Once the parameters are estimated, X can be predicted. Moreover, 95% bootstrap confidence intervals for the unknown parameters and 95% bootstrap prediction bounds for X are presented. The performance of the proposed procedure, in terms of the mean squared error (MSE) and the average width (AW) of the confidence intervals, is investigated by Monte Carlo simulation. A real data set is also studied to illustrate the proposed procedure.
Keywords: Approximate maximum likelihood estimator, bootstrap samples, interval censoring, mean squared prediction error, midpoint approximation, Monte Carlo simulation, one-sample prediction 
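The midpoint approximation replaces each censoring interval (L, R] with its midpoint, after which standard complete-data Weibull likelihood equations can be solved. A minimal sketch using a damped fixed-point iteration for the shape parameter (an assumed numerical scheme, not necessarily the paper's AML procedure):

```python
import math

def weibull_mle_midpoint(intervals, iters=500):
    """Fit Weibull (shape k, scale lam) after replacing each censoring
    interval (L, R] by its midpoint -- the midpoint approximation -- and
    solving the complete-data likelihood equations by damped fixed-point
    iteration on the shape."""
    x = [(l + r) / 2.0 for l, r in intervals]
    logs = [math.log(xi) for xi in x]
    mean_log = sum(logs) / len(x)
    k = 1.0
    for _ in range(iters):
        xk = [xi ** k for xi in x]
        a = sum(v * li for v, li in zip(xk, logs)) / sum(xk)
        k = 0.5 * k + 0.5 / (a - mean_log)    # damped update toward the MLE shape
    lam = (sum(xi ** k for xi in x) / len(x)) ** (1.0 / k)
    return k, lam
```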
Pages 191-214
This article aims to identify the housing price forecasting model with maximum accuracy and minimum error among machine learning methods. Five important machine learning algorithms are used to predict housing prices: K-nearest neighbor regression (KNNR), support vector regression (SVR), random forest regression (RFR), extreme gradient boosting regression (XGBR), and the long short-term memory (LSTM) neural network. The research uses data from the Statistics Center of Iran on purchases and sales of residential units in Tehran from 2014 to 2020, comprising 998,299 transactions and 11 features. Preprocessing steps such as handling missing data, converting categorical data, and normalization are applied to the housing data to obtain a final, error-free data set. K-fold cross-validation is used to split the data into training and test sets because of its simplicity, effectiveness, and general validity. Evaluation criteria including MSE, RMSE, MAE, ME, and R2 are used to compare the models and identify the best one. Comparison of the models across all evaluation criteria and all K-fold subsets demonstrates the stability and superiority of the extreme gradient boosting regression model.
Keywords: Housing price forecasting, nearest neighbor regression, random forest regression, support vector regression, long short-term memory neural network, extreme gradient boosting regression 
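K-fold cross-validation partitions the data into K folds, each serving once as the test set while the remaining folds form the training set. A minimal index-splitting sketch (illustrative; the study's exact splitting is not specified in the abstract):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k nearly equal contiguous folds; return a
    list of (train, test) index pairs, one per fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, test))
        start += size
    return splits
```

Averaging a model's error over all K test folds gives a more stable estimate than a single train/test split, which is why the comparison across folds supports the stability claim.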
Pages 215-234
In this study, a mixed $\delta$-shock model in discrete time is defined by combining the $\delta$-shock and extreme shock models. In this model, a system with multiple states fails in two ways: first, when $k$ interarrival times between consecutive shocks with magnitude larger than the critical threshold $\gamma$ fall in $[\delta_1, \delta_2]$, $\delta_1 < \delta_2$; and second, when an interarrival time between two consecutive shocks is less than $\delta_1$. The lifetime of the system and the Markov chain of the system's lifetime under the proposed mixed $\delta$-shock model are obtained. The mean lifetime of the system is also calculated, and a numerical example is provided to validate the analytical results.
Keywords: Discrete time, extreme shocks, interarrival times, lifetime, Markov chain, mixed $\delta$-shock model 
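The failure rule can be simulated directly. A minimal sketch under one reading of the model, with the magnitude threshold $\gamma$ folded into the probability that a given shock is "large" (these modeling choices are assumptions, not the paper's exact formulation):

```python
import random

def delta_shock_lifetime(p_large, k, d1, d2, rng=random):
    """Simulated failure time under one reading of the mixed delta-shock
    model: p_large is the chance a shock exceeds the critical magnitude,
    so gaps between large shocks are geometric in discrete time. The
    system fails at once if a gap is shorter than d1, or after k gaps
    falling in [d1, d2]."""
    t, in_band = 0, 0
    while True:
        gap = 1
        while rng.random() > p_large:   # geometric wait for the next large shock
            gap += 1
        t += gap
        if gap < d1:
            return t                    # fatal: consecutive large shocks too close
        if d1 <= gap <= d2:
            in_band += 1
            if in_band == k:
                return t                # k moderate gaps accumulated
```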
Pages 235-251
Covariance functions and variograms play a fundamental role in exploratory analysis and statistical modelling of spatial and spatiotemporal datasets. In this paper, we construct a new class of spatial covariance functions using the Fourier transform of some higher-order kernels. Moreover, we extend this class of spatial covariance functions to the spatiotemporal setting following the approach of Ma (2003).
Keywords: Bochner's theorem, characteristic function, covariance model, higher-order kernels, spatial data
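By Bochner's theorem, the Fourier transform of any nonnegative integrable spectral density is a valid (positive definite) covariance function, which is the mechanism behind building covariances from kernels. A minimal one-dimensional numerical sketch (illustrative; the paper's higher-order kernels are not reproduced here):

```python
import math

def covariance_from_spectrum(h, spectral_density, wmax=40.0, grid=4000):
    """C(h) = integral of cos(w*h) * f(w) dw over the real line: a valid
    covariance by Bochner's theorem whenever f is a nonnegative, integrable
    spectral density. Computed with the trapezoidal rule on [-wmax, wmax]."""
    dw = 2.0 * wmax / grid
    ws = [-wmax + i * dw for i in range(grid + 1)]
    vals = [math.cos(w * h) * spectral_density(w) for w in ws]
    return dw * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def gaussian_sd(w):
    """Standard normal density used as a spectral density; its Fourier
    transform is the Gaussian covariance exp(-h**2 / 2)."""
    return math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
```

Using the standard normal density as the spectral density recovers the familiar Gaussian covariance model, confirming the construction numerically.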