Table of Contents

Journal of Iranian Statistical Society
Volume: 1, Issue: 1, 2002

  • Publication date: 1381/02/11 (May 1, 2002)
  • Number of articles: 8
  • Zellner, A. Pages 1-5
  • Min, Y., Agresti, A. Pages 7-33
    Applications in which data take nonnegative values but have a substantial proportion of values at zero occur in many disciplines. The modeling of such “clumped-at-zero” or “zero-inflated” data is challenging. We survey models that have been proposed. We consider cases in which the response for the non-zero observations is continuous and in which it is discrete. For the continuous and then the discrete case, we review models for analyzing cross-sectional data. We then summarize extensions for repeated measurement analyses (e.g., in longitudinal studies), for which the literature is still sparse. We also mention applications in which more than one clump can occur and we suggest problems for future research.
  • Fraser, D.A.S., Yi, G.Y. Pages 55-78
    This paper develops default priors for Bayesian analysis that reproduce familiar frequentist and Bayesian analyses for models that are exponential or location. For the vector parameter case there is an information adjustment that avoids the Bayesian marginalization paradoxes and properly targets the prior on the parameter of interest, thus adjusting for any complicating nonlinearity; the details of this vector Bayesian issue will be investigated in detail elsewhere. As, in wide generality, a statistical model has an inference component structure that is approximately exponential or approximately location to third order, this provides general default prior procedures that can be described as reweighting likelihood in accord with a Jeffreys’ prior based on observed information. Two asymptotic models that have variable and parameter of the same dimension and agree at a data point to first derivative, conditional on an approximate ancillary, produce the same p-values to third order for inferences concerning scalar interest parameters.
  • Toutenburg, H., Heumann, C., Nittner, T., Scheid, S. Pages 79-109
    This paper gives a detailed overview of the problem of missing data in parametric and nonparametric regression. Theoretical basics and properties, as well as simulation results, may help the reader become familiar with the common problem of incomplete data sets. Of course, not all occurrences can be discussed, so this paper could be seen as an introduction to missing data within regression analysis and as an extension of the early paper [19].
  • Bosq, D. Pages 127-142
    We discuss the classical efficiency criteria in density estimation and propose some variants. The context is a general density estimation scheme that contains the cases of i.i.d. or dependent random variables, in discrete or continuous time. Unbiased estimation, optimality and asymptotic optimality are considered. An example of a density estimator that satisfies some suggested criteria is given.
    Introduction: The aim of this paper is to present a general approach to density estimation, discuss the usual efficiency criteria and suggest some variants of these criteria. The general scheme is introduced in Section 2. It contains the cases of discrete or continuous random variables, i.i.d. or correlated, and observed in discrete or continuous time. In Section 3 we describe a general construction of density estimators…
  • Walker, S.G. Pages 143-163
    This paper reviews Bayesian nonparametric methods and discusses how parametric predictive densities can be constructed using nonparametric ideas.
    1 Introduction: I will start with what might seem an unfair comment, yet it is meant to be instructive. The topic I am writing about, “Bayesian nonparametric theory”, should, in my opinion, be entitled “Bayesian theory”. This name has however been reserved for “Bayesian parametric theory”. The goal of the Bayesian, in the first instance, is the construction of a prior probability distribution on a space of density functions. Exactly how large a set of densities the prior covers is the first choice to be made; the second is the shape of the prior. It is fair to say that the larger the set of densities, the harder it is to put a probability on…