Table of Contents

  • Volume 18, Issue 1, 2019
  • Publication date: 1398/03/11
  • Number of articles: 10
  • Masoud Amiri, Muhyiddin Izadi*, Baha-Eldin Khaledi Pages 1-16
    In this paper, we study the problem of optimally allocating insurance layers for a portfolio of i.i.d. exponential risks. Using the first stochastic dominance criterion, we obtain an optimal allocation for the total retained risk faced by a policyholder. This result partially generalizes known results in the literature for deductible as well as policy-limit coverages.
    Keywords: Deductible policy, First stochastic dominance, Majorization, Policy limit, Schur-convex functions, Utility functions
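As a quick numerical illustration of the allocation question (a Monte Carlo sketch only, not the paper's stochastic-dominance argument; the deductible budget and rate below are arbitrary choices), one can compare the expected total retained risk of two deductible allocations for i.i.d. exponential losses:

```python
import random

random.seed(0)

def retained_deductible(x, d):
    # Under a deductible policy the policyholder retains the loss up to d.
    return min(x, d)

def retained_policy_limit(x, l):
    # Under a policy-limit coverage the policyholder retains the excess above l.
    return max(x - l, 0.0)

def mean_total_retained(ds, rate=1.0, n=100_000):
    # Monte Carlo estimate of the expected total retained risk for
    # i.i.d. exponential losses with per-risk deductibles ds.
    total = 0.0
    for _ in range(n):
        total += sum(retained_deductible(random.expovariate(rate), d)
                     for d in ds)
    return total / n

# Same total deductible budget, allocated evenly vs. unevenly.
even = mean_total_retained([1.0, 1.0])
uneven = mean_total_retained([1.8, 0.2])
```

Since E[min(X, d)] = 1 - e^(-d) is concave in d for a unit-rate exponential loss, the even split retains more risk in expectation than the uneven one; the paper's first-stochastic-dominance results are much stronger than this comparison of means.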
  • Yegnanew Shiferaw*, Jacqueline Galpin Pages 17-51
    Area level linear mixed models can generally be applied to produce indirect small area estimators when only aggregated data, such as sample means, are available. This paper addresses an important gap in the small area estimation literature: constructing confidence intervals (CIs) when the estimated variance of the random effect, as well as the estimated mean squared error (MSE), is negative. The coverage accuracy of the proposed CI is of order O(m^(-3/2)), where m is the number of sampled areas. The performance of the proposed method is assessed in terms of coverage probability (CP) and average length (AL) in a simulation experiment. The results demonstrate the superiority of the proposed method over existing naive CIs, and the proposed CI based on the weighted estimator is comparable with existing corrected CIs based on the empirical best linear unbiased predictor (EBLUP).
    Keywords: Confidence interval, Weighted estimator, Basic area level model, Small area estimation
  • Shirin Shoaee* Pages 53-87
    Many distributions with bathtub-shaped failure rates have been proposed for real-life data. Chen (2000) defined a two-parameter distribution whose failure rate function can be bathtub-shaped or increasing. In this paper, we consider two bivariate models based on Chen's distribution, using the method of Marshall and Olkin (1967) in the bivariate case and that of Marshall and Olkin (1997) in the univariate case. In the second case, their method is generalized to the bivariate setting and a new bivariate distribution is introduced. These new bivariate distributions have natural interpretations and can be applied in fatal shock models or in competing risks models. We call them the bivariate Chen (BCH) distribution and the bivariate Chen-geometric (BCHG) distribution, respectively; the BCH distribution arises as a special case of the BCHG model. We then investigate various properties of the new distributions. The BCHG distribution has five parameters, and its maximum likelihood estimators cannot be obtained in closed form, so we suggest an EM algorithm that is very easy to implement. Monte Carlo simulations are performed to investigate the effectiveness of the proposed algorithm, and two real data sets are analyzed for illustrative purposes.
    Keywords: Bivariate model, Bathtub-shaped failure rate, EM algorithm, Pseudo likelihood function, Monte Carlo simulation
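For reference, the univariate Chen (2000) distribution has survival function S(x) = exp(lam * (1 - exp(x^beta))), and its bathtub-shaped failure rate for beta < 1 can be checked numerically. A minimal sketch (the parameter values are arbitrary choices for illustration, not from the paper):

```python
import math

def chen_hazard(x, lam=1.0, beta=0.5):
    # Hazard rate of the Chen (2000) distribution with survival function
    # S(x) = exp(lam * (1 - exp(x**beta))):
    # h(x) = lam * beta * x**(beta - 1) * exp(x**beta)
    return lam * beta * x ** (beta - 1) * math.exp(x ** beta)

# For beta < 1 the hazard is bathtub-shaped, with its minimum at
# x* = ((1 - beta) / beta) ** (1 / beta); here beta = 0.5 gives x* = 1.
```

Evaluating the hazard on either side of x* = 1 confirms the decrease-then-increase (bathtub) shape for this parameter choice.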
  • Vahid Rezaei Tabar*, Hosna Fathipor, Horacio Pérez-Sánchez, Farzad Eskandari, Dariusz Plewczynski Pages 89-112
    Hidden Markov models (HMMs) are a ubiquitous tool for modeling time series data, but the statistical assumptions they make can leave them poor at capturing dependency between observations. The forward-directed autoregressive HMM (ARHMM) extends the HMM to handle these dependencies, and in some settings an autoregressive hidden Markov model directed backward in time is more appropriate. In this paper, we present a sequence-level mixture of these two forms of ARHMM (called MARHMM), effectively allowing the model to choose for itself whether a forward-directed model, a backward-directed model, or a soft combination of the two is most appropriate for a given data set. For this purpose, we use the conditional independence relations in the context of a Bayesian network, a probabilistic graphical model. The performance of the MARHMM is assessed on simulated and real data sets, and we show that the proposed model has greater modeling power than the conventional forward-directed ARHMM.
    Keywords: Autoregressive Hidden Markov Model, Mixture ARHMM, Bayesian Network
  • Mohammad Hossein Karbalaee, Seyed Mohammad M. Tabatabaey*, Mohammed Arashi Pages 113-131
    When a series of stochastic restrictions is available, we study the performance of the preliminary test generalized Liu estimators (PTGLEs) based on the Wald, likelihood ratio and Lagrange multiplier tests. In this respect, the optimal range of the biasing parameter is obtained in the mean squared error sense, using the minimum/maximum values of the components of the biasing matrix D = diag(d1, d2, ..., dp), 0 < di < 1, i = 1, ..., p. We support our findings with some numerical illustrations.
    Keywords: Biasing matrix, Generalized Liu estimator, Multiple regression model, Preliminary test estimator, Stochastic restriction
  • Afsaneh Azizi, Abdolreza Sayyareh* Pages 133-155
    In recent years, bivariate lifetime distributions have often been used to model reliability and survival data. In this paper, we introduce a bivariate Burr III distribution whose marginals are Burr III distributions. The joint probability density function, the joint cumulative distribution function and the joint survival function can all be expressed in compact forms. We suggest using the ECM algorithm to compute the maximum likelihood estimators of the unknown parameters, report some simulation results, and perform one data analysis for illustrative purposes.
    Keywords: Bivariate distribution, Burr III distribution, ECM algorithm, Pseudo likelihood
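The Burr III marginal referred to above has the closed-form CDF F(x) = (1 + x^(-c))^(-k), which inverts in closed form and therefore supports inverse-transform sampling. A minimal sketch (the parameterization is the standard two-parameter Burr III form; the construction of the bivariate model itself is in the paper):

```python
def burr3_cdf(x, c, k):
    # Burr III CDF: F(x) = (1 + x**(-c))**(-k), for x > 0, c > 0, k > 0.
    return (1.0 + x ** (-c)) ** (-k)

def burr3_quantile(u, c, k):
    # Closed-form inverse CDF, usable for inverse-transform sampling:
    # x = (u**(-1/k) - 1)**(-1/c), for 0 < u < 1.
    return (u ** (-1.0 / k) - 1.0) ** (-1.0 / c)
```

Feeding uniform(0, 1) draws through `burr3_quantile` yields Burr III variates; the round trip `burr3_cdf(burr3_quantile(u, c, k), c, k) == u` holds up to floating-point error.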
  • Arezo Hajrajabi*, Afshin Fallah Pages 157-175
    This paper considers a first-order autoregressive model with skew-normal innovations from a parametric point of view. We develop the essential theory for computing the maximum likelihood estimates of the model parameters via an Expectation-Maximization (EM) algorithm, and we also propose a Bayesian method to estimate the unknown parameters. The efficiency and applicability of the proposed model are assessed via a simulation study and a real-world example.
    Keywords: Autoregressive model, Bayesian inference, EM algorithm, Maximum likelihood estimator, Skew-normal innovations
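Simulating from the model under study is straightforward via Azzalini's stochastic representation of the skew-normal law. The sketch below only generates data from an AR(1) process with skew-normal innovations; it is not the authors' EM or Bayesian estimation procedure, and the parameter values are arbitrary:

```python
import math
import random

random.seed(1)

def skew_normal(alpha):
    # Azzalini's representation: if U1, U2 ~ N(0, 1) are independent and
    # delta = alpha / sqrt(1 + alpha**2), then
    # delta*|U1| + sqrt(1 - delta**2)*U2 is skew-normal with shape alpha.
    delta = alpha / math.sqrt(1.0 + alpha * alpha)
    u1, u2 = random.gauss(0, 1), random.gauss(0, 1)
    return delta * abs(u1) + math.sqrt(1.0 - delta * delta) * u2

def simulate_ar1(n, phi=0.6, alpha=3.0, x0=0.0):
    # First-order autoregression X_t = phi * X_{t-1} + e_t,
    # with skew-normal innovations e_t.
    xs, x = [], x0
    for _ in range(n):
        x = phi * x + skew_normal(alpha)
        xs.append(x)
    return xs

path = simulate_ar1(10_000)
```

With shape alpha = 3 the innovation mean is delta * sqrt(2/pi) ≈ 0.757, so the stationary mean of the path is roughly 0.757 / (1 - 0.6) ≈ 1.89.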
  • Mahsa Tavakoli*, Naserreza Arghami, Malihe Abbasnejad Pages 177-190
    We introduce a new goodness-of-fit test for normality based on the Balakrishnan-Sanghvi divergence measure. To estimate the divergence, we use a method similar to Vasicek's estimator of the Shannon entropy; a test statistic based on kernel density estimation is also investigated. Critical values and the power of the tests are computed by Monte Carlo simulation, and the tests are shown to be consistent. Comparing the power of the proposed tests with that of other normality tests, we recommend the new entropy-based test built on the Balakrishnan-Sanghvi divergence with the Vasicek-type estimator.
    Keywords: Divergence measure, Entropy, Normality test, Vasicek sample entropy
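The Vasicek method mentioned above estimates Shannon entropy from sample spacings: H = (1/n) * sum over i of log(n/(2m) * (X_(i+m) - X_(i-m))), with order statistics clamped at the sample boundaries. A minimal sketch of that estimator (the divergence-based test statistic itself is the paper's contribution and is not reproduced here):

```python
import math
import random

random.seed(42)

def vasicek_entropy(sample, m):
    # Vasicek's spacing estimator of Shannon entropy:
    # H = (1/n) * sum_i log( n/(2m) * (X_(i+m) - X_(i-m)) ),
    # where X_(j) is clamped to X_(1) for j < 1 and X_(n) for j > n.
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i in range(n):
        hi = x[min(i + m, n - 1)]
        lo = x[max(i - m, 0)]
        total += math.log(n / (2.0 * m) * (hi - lo))
    return total / n

# For N(0, 1) the true entropy is 0.5 * log(2*pi*e) ≈ 1.4189.
data = [random.gauss(0, 1) for _ in range(5000)]
h = vasicek_entropy(data, m=30)
```

The estimator is consistent but biased for finite samples; the window size m (here 30, an arbitrary choice) trades bias against variance.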
  • Farha Sultana, Yogesh Tripathi*, Manoj Kumar Rastogi Pages 191-236
    In this paper, the problem of estimating the unknown parameters of a generalized half-normal distribution is considered under Type II progressive hybrid censoring, a combination of the Type II progressive and hybrid censoring schemes. We obtain maximum likelihood estimators of the parameters and construct asymptotic intervals using the observed Fisher information matrix. Further, Bayes estimates are computed under the squared error loss function by applying different approximation methods. We also obtain prediction estimates and prediction intervals for censored observations. The performance of the different methods is compared using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.
    Keywords: Bayesian estimates, HPD intervals, Maximum likelihood estimates, Tierney-Kadane method, Type II progressive hybrid censoring, Prediction
  • Akin A. Olosunde*, Ajibola Taiwo Soyinka Pages 237-252
    In point estimation, even when the estimator has a probability density function, the probability that the estimate exactly equals the parameter being estimated tends to zero; at best the estimator is asymptotically unbiased. Hence, an interval about the point estimate is needed to quantify the uncertainty of the estimate. When the random variable is not normally distributed, as is common in practice, interval estimates for the location and scale parameters may be too wide to give the desired assurance. In this study, we obtain confidence procedures for the location and scale parameters of the symmetric and asymmetric exponential power distribution, which is robust under skewness and under tails heavier or thinner than the normal, using the pivotal quantities approach on the basis of a random sample of fixed size n. Some simulation studies and applications are also examined.
    Keywords: Shape parameter, Short tails, Exponential power distribution, Confidence interval
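For context, one common parameterization of the symmetric exponential power density (parameterizations vary across the literature, so this is an assumption, not necessarily the one used in the paper) is f(x) = p^(1 - 1/p) / (2*sigma*Gamma(1/p)) * exp(-|x - mu|^p / (p*sigma^p)), which recovers the normal at p = 2 and the Laplace at p = 1. A minimal sketch with a numerical normalization check:

```python
import math

def epd_pdf(x, mu=0.0, sigma=1.0, p=1.5):
    # Exponential power density (one common parameterization):
    # f(x) = p**(1 - 1/p) / (2*sigma*Gamma(1/p))
    #        * exp(-|x - mu|**p / (p * sigma**p)).
    # p = 2 recovers the normal; p = 1 recovers the Laplace distribution.
    c = p ** (1.0 - 1.0 / p) / (2.0 * sigma * math.gamma(1.0 / p))
    return c * math.exp(-abs(x - mu) ** p / (p * sigma ** p))

# Sanity check: the density integrates to 1 (Riemann sum on [-20, 20];
# the tails beyond that range are negligible for these parameters).
step = 0.001
area = sum(epd_pdf(-20.0 + i * step) for i in range(40001)) * step
```

The shape parameter p controls the tail weight, which is what makes the confidence procedures in the paper robust to tails heavier or thinner than the normal.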