Table of Contents

Journal of Iranian Statistical Society
Volume:19 Issue: 1, Spring 2020

  • Publication date: 1399/04/31
  • Number of titles: 10
  • Wei Lin*, Xiang Li, Augustine Wong Pages 1-19

    Although the random sum distribution has been well-studied in probability theory, inference for the mean of such a distribution is very limited in the literature. In this paper, two approaches are proposed to obtain inference for the mean of the Poisson-Exponential distribution. Both proposed approaches require the log-likelihood function of the Poisson-Exponential distribution, but the exact form of the log-likelihood function is not available. An approximate form of the log-likelihood function is therefore derived by the saddlepoint method. Inference for the mean of the Poisson-Exponential distribution can then be obtained either from the modified signed likelihood root statistic or from the Bartlett corrected likelihood ratio statistic. The explicit form of the modified signed likelihood root statistic is derived in this paper, and a systematic method is proposed to numerically approximate the Bartlett correction factor, and hence the Bartlett corrected likelihood ratio statistic. Simulation studies show that both methods are extremely accurate even when the sample size is small.

    Keywords: Bartlett Correction, p-value Function, Saddlepoint Approximation, Signed Likelihood Root
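    As a quick illustration of the random-sum setting above, the sketch below checks by Monte Carlo that the Poisson-Exponential sum S = X_1 + ... + X_N, with N ~ Poisson(lam) and X_i ~ Exp(rate mu), has mean lam/mu. The function name and parameter values are ours for illustration, not the paper's:

```python
import math
import random

def random_sum_mean_mc(lam, mu, n_rep=200_000, seed=1):
    """Monte Carlo estimate of E[S] for the Poisson-Exponential random sum
    S = X_1 + ... + X_N, N ~ Poisson(lam), X_i ~ Exp(rate mu).
    The theoretical mean is lam / mu (Wald's identity)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rep):
        # draw N ~ Poisson(lam) by sequential inversion (lam assumed moderate)
        n, p, u = 0, math.exp(-lam), rng.random()
        c = p
        while u > c:
            n += 1
            p *= lam / n
            c += p
        # add N exponential(rate mu) summands
        total += sum(rng.expovariate(mu) for _ in range(n))
    return total / n_rep
```

The paper's saddlepoint-based inference targets exactly this mean; the simulation only verifies the moment identity, not the likelihood machinery.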
  • Ahmed Alkhateeb, Zakariya Algamal* Pages 21-37

    The Liu estimator has consistently been demonstrated to be an attractive shrinkage method for reducing the effects of multicollinearity. The Poisson regression model is a well-known model in applications when the response variable consists of count data. However, it is known that multicollinearity negatively affects the variance of the maximum likelihood estimator (MLE) of the Poisson regression coefficients. To address this problem, a Poisson Liu estimator has been proposed by numerous researchers. In this paper, a Jackknifed Liu-type Poisson estimator (JPLTE) is proposed and derived. The idea behind the JPLTE is to decrease the shrinkage parameter and, therefore, improve the resultant estimator by reducing the amount of bias. Our Monte Carlo simulation results suggest that the JPLTE estimator can bring significant improvements relative to other existing estimators. In addition, the results of a real application demonstrate that the JPLTE estimator outperforms both the Poisson Liu estimator and the maximum likelihood estimator in terms of predictive performance.

    Keywords: Multicollinearity, Liu Estimator, Poisson Regression Model, Shrinkage, Monte Carlo Simulation
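    The Liu-type shrinkage the abstract builds on can be sketched in its standard form, beta_d = (X'WX + I)^{-1}(X'WX + dI) beta_MLE, applied to the Poisson MLE. This is the generic Poisson Liu estimator from the shrinkage literature, not the paper's jackknifed JPLTE; the fixed shrinkage value d is purely illustrative:

```python
import numpy as np

def poisson_mle(X, y, n_iter=50):
    """Poisson regression MLE by Newton-Raphson (IRLS).  Assumes the first
    column of X is an intercept, used for a stable starting value."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())        # requires a positive mean count
    for _ in range(n_iter):
        mu = np.exp(X @ beta)         # fitted Poisson means
        H = X.T @ (mu[:, None] * X)   # X' W X with W = diag(mu)
        beta = beta + np.linalg.solve(H, X.T @ (y - mu))
    return beta

def poisson_liu(X, y, d=0.5):
    """Liu-type shrinkage of the Poisson MLE, standard form
    beta_d = (X'WX + I)^{-1} (X'WX + d I) beta_mle, with d in [0, 1]
    fixed here for illustration (the paper tunes and jackknifes it)."""
    beta = poisson_mle(X, y)
    mu = np.exp(X @ beta)
    H = X.T @ (mu[:, None] * X)
    I = np.eye(X.shape[1])
    return np.linalg.solve(H + I, (H + d * I) @ beta)
```

Note that d = 1 recovers the MLE exactly, while d < 1 shrinks the coefficients toward zero, which is what reduces the variance inflation caused by multicollinearity.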
  • Jaber Kazempoor, Arezou Habibirad*, Kheirolah Okhli Pages 39-57

    In recent decades, studying order statistics arising from independent and not necessarily identically distributed (INID) random variables has been a main concern for researchers. The cumulative distribution function (CDF) of these order statistics (Fi:n) is difficult to manipulate, computationally expensive, and software-intensive to evaluate. Therefore, obtaining approximations and bounds for Fi:n, as well as other theoretical properties of these variables such as moments, quantiles, the characteristic function, and some related probabilities, has always been a main challenge. Recently, Bayramoglu (2018) provided a new definition of ordering, by point-to-point ordering of the Fi's (D-order), and showed that these new functions are CDFs and that the corresponding random variables are independent. He thus suggested new CDFs (F[i]) that can be used as alternatives to Fi:n. Using only F[1] and F[n], we obtain upper and lower bounds for Fi:n, together with precise approximations for F1:n and Fn:n; in many cases, approximations for the other CDFs are derived as well. In addition, we compare the approximated functions with those offered by Bayramoglu and show that our proposed functions perform far better than the D-order functions.

    Keywords: Approximation, bounds, cumulative distribution function, independent not necessarily identically distributed, order statistics
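    For context on why Fi:n is awkward to evaluate: with p_j = F_j(x), Fi:n(x) = P(at least i of the independent indicators 1{X_j <= x} equal 1), a Poisson-binomial tail. A direct O(n^2) recursion (a standard trick, separate from the paper's D-order bounds) computes it exactly:

```python
def cdf_order_stat(i, ps):
    """F_{i:n}(x) for independent, non-identical X_j.  With p_j = F_j(x),
    this is P(at least i successes among independent Bernoulli(p_j)),
    computed by the Poisson-binomial recursion in O(n^2)."""
    n = len(ps)
    # dist[k] = P(exactly k of the indicators processed so far are 1)
    dist = [1.0] + [0.0] * n
    for p in ps:
        for k in range(n, 0, -1):
            dist[k] = dist[k] * (1 - p) + dist[k - 1] * p
        dist[0] *= 1 - p
    return sum(dist[i:])
```

The extreme cases reduce to the familiar products: F1:n(x) = 1 - prod(1 - p_j) and Fn:n(x) = prod p_j, which is why the paper can anchor its bounds on F[1] and F[n].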
  • Ahmad Ehyaei*, Kasra Alishahi, Ali Shojaei Pages 59-68

    For multiple testing problems, Benjamini and Hochberg (1995) proposed the false discovery rate (FDR) as an alternative to the family-wise error rate (FWER). Since then, researchers have provided many proofs to control the FDR under different assumptions. Storey et al. (2004) showed that the rejection threshold of a BH step-up procedure is a stopping time with respect to the reverse filtration generated by the p-values and proposed a new proof based on the martingale theory. Following this work, martingale methods have been widely used to establish FDR control in various settings, but have been primarily applied to reverse filtration only. However, forward filtration can be more amenable for generalized and adaptive FDR controlling procedures. In this paper, we present a new proof, based on forward filtration, for step-down FDR controlling procedures that start from small p-values and update the rejection regions as larger p-values are observed.

    Keywords: Benjamini-Hochberg, False Discovery Rate, Step-down Methods
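    A minimal sketch of the step-down scheme the abstract refers to, at the BH-type thresholds i*alpha/m: scan p-values from smallest to largest and stop at the first failure. This is the generic step-down template, not the paper's generalized forward-filtration procedure:

```python
def step_down_reject(pvals, alpha=0.05):
    """Step-down multiple testing at BH-type thresholds i*alpha/m:
    reject hypotheses in increasing p-value order while p_(i) <= i*alpha/m,
    stopping at the first p-value that exceeds its threshold."""
    m = len(pvals)
    order = sorted(range(m), key=lambda j: pvals[j])
    rejected = set()
    for rank, j in enumerate(order, start=1):
        if pvals[j] <= rank * alpha / m:
            rejected.add(j)
        else:
            break   # step-down: everything after the first failure is kept
    return rejected
```

The contrast with the step-up BH procedure is the direction of the scan: step-up searches from the largest p-value for the last threshold crossing, while step-down grows the rejection set from the smallest p-values, which is what makes the forward filtration natural here.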
  • Farkhondeh Alsadat Sajadi*, MohammadHossein Poursaeed, Sareh Goli Pages 69-83

    In this paper, we study an (n-k+1)-out-of-n system whose components are statistically independent but not necessarily identically distributed. Assuming that at least m components have failed by a fixed time while the system is still working, we obtain a mixture representation of the survival function for a quantity called the conditional inactivity time of the failed components in the system. Moreover, this quantity is compared stochastically for (n-k+1)-out-of-n systems, both within one sample with respect to k and m and between two samples.

    Keywords: Conditional Inactivity Time, Nonidentical Components, Residual Lifetime
  • Ashkan Khalifeh, Eisa Mahmoudi*, Ali Dolati Pages 85-120

    In this paper, two-stage and purely sequential estimation procedures are considered to construct fixed-width confidence intervals for the reliability parameter under the stress-strength model when the stress and strength are independent exponential random variables with different scale parameters. The exact distribution of the stopping rule under the purely sequential procedure is approximated using the law of large numbers and Monte Carlo integration. For the two-stage sequential procedure, explicit formulas are provided for the distribution of the total sample size and for the expected value and mean squared error of the maximum likelihood estimator of the reliability parameter under the stress-strength model. Moreover, it is shown that both proposed sequential procedures terminate in finitely many steps, and that in exceptional cases the exact distribution of the stopping time is degenerate at the initial sample size. The performance of the proposed methodologies is investigated with the help of simulations. Finally, the procedures are illustrated using real data.

    Keywords: Law of Large Numbers, Purely Sequential Sampling, Stopping Rule, Two-stage Sequential Sampling
  • Hamid Karamikabir, Mahmoud Afshari*, Haitham M. Yousof, Morad Alizadeh, Gholamhossien Hamedani Pages 121-161

    Statistical distributions are very useful in describing and predicting real world phenomena. Consequently, the choice of the most suitable statistical distribution for modeling given data is very important. In this paper, we propose a new class of lifetime distributions called the Weibull Topp-Leone Generated (WTLG) family. The proposed family is constructed by compounding the Weibull and Topp-Leone distributions. It can provide better fits and is very flexible in comparison with various known lifetime distributions. Several general statistical properties of the WTLG family are studied in detail, including density and hazard shapes, limit behavior, mixture representation, skewness and kurtosis, moments, the moment generating function, and incomplete moments. Different methods have been used to estimate its parameters, and the performances of the estimators are numerically investigated. We discuss inference on the new family based on likelihood ratio statistics for testing some lifetime distributions, and assess the performance of the maximum likelihood estimators in terms of biases and mean squared errors by means of a simulation study. The importance and flexibility of the new family are illustrated by means of two applications to real data sets.

    Keywords: Generating Function, Lifetime Distributions, Maximum Likelihood Estimation, Quantile Function, Topp-Leone Distribution, Weibull Distribution
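    To make the "generated family" idea concrete, the sketch below compounds a Topp-Leone baseline CDF with the generic Weibull-G construction, G(x) = 1 - exp(-a(F/(1-F))^b), from the wider generated-family literature. This is an illustrative assumption about the compounding pattern; the authors' exact WTLG definition may differ:

```python
import math

def topp_leone_cdf(x, nu):
    """Topp-Leone baseline CDF on (0, 1): F(x) = (2x - x^2)^nu."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return (2.0 * x - x * x) ** nu

def weibull_g_cdf(x, a, b, nu):
    """Generic Weibull-G compounding of a baseline CDF F (here Topp-Leone):
    G(x) = 1 - exp(-a * (F/(1-F))^b).  One plausible construction only;
    not asserted to be the paper's WTLG formula."""
    F = topp_leone_cdf(x, nu)
    if F >= 1.0:
        return 1.0
    return 1.0 - math.exp(-a * (F / (1.0 - F)) ** b)
```

Whatever the exact compounding, the resulting function must remain a valid CDF (0 at the lower end, 1 at the upper, monotone in between), which the three shape parameters then exploit for flexible density and hazard shapes.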
  • Selim Orhun Susam* Pages 163-183

    The purpose of this paper is to introduce a new method for estimating the Archimedean copula dependence parameter in the non-parametric setting. The estimate of the dependence parameter is selected as the value that minimizes the Cramér-von-Mises distance between the empirical Bernstein Kendall distribution function and the true Kendall distribution function. A Monte Carlo study is performed to measure the performance of the new estimator against conventional estimation methods. In terms of estimation performance, simulation results show that the proposed minimum Cramér-von-Mises estimation method performs well for low dependence and small sample sizes when compared with the other estimation methods. As illustrations, the new minimum distance estimator of the dependence parameter is applied to model the dependence of two real data sets.

    Keywords: Cramér-von-Mises, Archimedean copula, Parameter estimation, Bernstein polynomials
  • Ganapathi Rajesh, Enchakudiyil Ibrahim Abdul Sathar*, Krishnan Vijayalekshmi Ammal Reshmi Pages 185-208

    Di Crescenzo and Longobardi (2002) proposed a measure of uncertainty related to past life, namely past entropy. The present paper addresses the question of extending this concept to the bivariate set-up and studies some properties of the proposed measure. It is shown that the proposed measure uniquely determines the distribution function. Characterizations for some bivariate lifetime models are obtained using the proposed measure. Further, we define new classes of life distributions based on this measure, and properties of the new classes are also discussed. We also propose a non-parametric kernel estimator for the proposed measure and illustrate the performance of the estimator using numerical data.

    Keywords: Bivariate Reversed Hazard Rate, Bivariate Mean Inactivity Time, Non-parametric Kernel Estimation, Past Entropy
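    For orientation, the univariate past entropy that the paper extends is H(t) = -∫₀ᵗ (f(x)/F(t)) log(f(x)/F(t)) dx, the differential entropy of X given X ≤ t. A small numerical sketch (trapezoid rule; the function name is ours):

```python
import math

def past_entropy(f, F, t, n=10_000):
    """Univariate past entropy H(t): differential entropy of the
    conditional density f(x)/F(t) of X given X <= t, on [0, t].
    Computed by the trapezoid rule with n subintervals."""
    Ft = F(t)
    h = t / n
    def g(x):
        d = f(x) / Ft          # conditional density of X given X <= t
        return -d * math.log(d) if d > 0 else 0.0
    total = 0.5 * (g(0.0) + g(t)) + sum(g(k * h) for k in range(1, n))
    return total * h
```

For a Uniform(0, 1) lifetime the conditional density is 1/t on (0, t), so H(t) = log t, a convenient closed-form check; the bivariate version of the paper replaces f/F with the joint density conditioned on both components lying below their respective time points.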
  • Zubair Ahmad, Gholamhossien Hamedani*, Mohammed Elgarhy Pages 209-228

    In this paper, a new method of introducing an additional parameter to a continuous distribution is proposed, which leads to a new class of distributions called the weighted exponentiated family. A special sub-model is discussed. General expressions for some of the mathematical properties of this class, such as the moments, quantile function, generating function, and order statistics, are derived, and certain characterizations are also discussed. To estimate the model parameters, the method of maximum likelihood is applied. A simulation study is carried out to assess the finite-sample behavior of the maximum likelihood estimators. Finally, the usefulness of the proposed method is illustrated via two applications to real data sets.

    Keywords: Exponentiated Family, Weighted Family, Exponential Distribution, Moments, Order Statistics, Characterizations, Maximum Likelihood Estimation
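    The classical exponentiated construction underlying this family raises a baseline CDF to a power, G(x) = F(x)^alpha, and samples by inverse transform via x = F⁻¹(u^(1/alpha)). Only this base construction is sketched below; the paper's additional weighting scheme is not reproduced:

```python
import math
import random

def exponentiated_cdf(F, alpha):
    """Classical exponentiated family: G(x) = F(x)^alpha for a baseline
    CDF F and shape parameter alpha > 0."""
    return lambda x: F(x) ** alpha

def exponentiated_sample(F_inv, alpha, n, seed=0):
    """Inverse-CDF sampling: if U ~ Uniform(0, 1), then F_inv(U**(1/alpha))
    has CDF F(x)**alpha, since P(U**(1/alpha) <= F(x)) = F(x)**alpha."""
    rng = random.Random(seed)
    return [F_inv(rng.random() ** (1.0 / alpha)) for _ in range(n)]
```

With an exponential baseline F(x) = 1 - e^(-x), for instance, G(log 2) = 0.5^alpha, giving an easy closed-form check of both the CDF and the sampler.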