License term: lease or *
License type: standalone and network versions
Country of origin: USA
Delivery medium: download
Supported platforms: Windows, macOS, Linux
The software offered by 科学软件网 spans all major disciplines, with more than 1,000 titles available to meet the research needs of universities, enterprises, and public institutions. 科学软件网 also provides software training and seminar services, currently offering 68 video courses covering 34 software packages.
Sample-size analysis for confidence intervals in Stata 16
The new ciwidth command performs Precision and Sample Size (PrSS) analysis, which is sample-size analysis for confidence intervals (CIs). This method is used when you are planning a study and you want to optimally allocate resources when CIs are to be used for inference. Said differently, you use this method when you want to estimate the sample size required to achieve the desired precision of a CI in a planned study.
ciwidth computes the required sample sizes, CI precision, and more for the following:
• CI for one mean
• CI for one variance
• CI for two independent means
• CI for two paired means
The control panel interface lets you select the analysis type and input assumptions to obtain desired results.
ciwidth allows results to be displayed in customizable tables and graphs.
ciwidth also provides facilities for you to add your own methods.
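For instance, a minimal sketch of a PrSS analysis for a CI for one mean might look like the line below; the width, probability-of-width, and standard-deviation values are illustrative assumptions, not values from the original text:
. ciwidth onemean, width(10) probwidth(0.95) sd(15)
This reports the sample size needed so that, with 95% probability, the CI for the mean is no wider than 10 units, assuming a population standard deviation of 15.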

Frequentist analysis is entirely data-driven and strongly depends on whether or not the data
assumptions required by the model are met. On the other hand, Bayesian analysis provides a more
robust estimation approach by using not only the data at hand but also some existing information or
knowledge about model parameters.
In frequentist statistics, estimators are used to approximate the true values of the unknown parameters,
whereas Bayesian statistics provides an entire distribution of the parameters. In our example of a
prevalence of an infectious disease from What is Bayesian analysis?, frequentist analysis produced one
point estimate for the prevalence, whereas Bayesian analysis estimated the entire posterior distribution
of the prevalence based on a given sample.
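As a hedged sketch of this contrast (the variable name disease, the parameter name {prev}, and the flat beta(1,1) prior are assumptions for illustration), the two approaches could be run as:
. ci proportions disease
. bayesmh disease, likelihood(dbernoulli({prev})) prior({prev}, beta(1,1))
The first command reports a single point estimate and CI for the prevalence; the second simulates the entire posterior distribution of {prev} given the observed sample.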

Bayesian and frequentist approaches have very different philosophies about what is considered fixed
and, therefore, have very different interpretations of the results. The Bayesian approach assumes that
the observed data sample is fixed and that model parameters are random. The posterior distribution
of parameters is estimated based on the observed data and the prior distribution of parameters and is
used for inference. The frequentist approach assumes that the observed data are a repeatable random
sample and that parameters are unknown but fixed and constant across the repeated samples. The
inference is based on the sampling distribution of the data or of the data characteristics (statistics). In
other words, Bayesian analysis answers questions based on the distribution of parameters conditional
on the observed sample, whereas frequentist analysis answers questions based on the distribution of
statistics obtained from repeated hypothetical samples, which would be generated by the same process
that produced the observed sample given that parameters are unknown but fixed. Frequentist analysis
consequently requires that the process that generated the observed data is repeatable. This assumption
may not always be feasible. For example, in meta-analysis, where the observed sample represents the
collected studies of interest, one may argue that the collection of studies is a one-time experiment.

New in Bayesian analysis in Stata 16: multiple chains, predictions, and more
Multiple chains.
Bayesian inference based on an MCMC (Markov chain Monte Carlo) sample is valid only if the Markov chain has converged. One way we can evaluate this convergence is to simulate and compare multiple chains.
The new nchains() option can be used with both the bayes: prefix and the bayesmh command. For instance, you type
. bayes, nchains(4): regress y x1 x2
and four chains will be produced. The chains are combined to produce a more accurate final result. Before interpreting the results, however, you can compare the chains graphically to evaluate convergence. You can also evaluate convergence using the Gelman–Rubin convergence diagnostic, which is now reported by bayes: regress and other Bayesian estimation commands when multiple chains are simulated. If you are concerned about nonconvergence, you can investigate further with the bayesstats grubin command, which reports individual Gelman–Rubin diagnostics for each parameter in your model.
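For example, after the command above, follow-up checks along these lines are possible (the parameter {y:x1} is simply the coefficient on x1 in this hypothetical model):
. bayesgraph diagnostics {y:x1}
. bayesstats grubin
The first overlays the trace and density plots of the four chains for visual comparison; the second reports the Gelman–Rubin Rc statistic for each model parameter, with values close to 1 suggesting convergence.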
Bayesian predictions.
Bayesian predictions are simulated values from the posterior predictive distribution. These predictions are useful for checking model fit and for predicting out-of-sample observations. After you fit a model with bayesmh, you can use bayespredict to compute these simulated values or functions of them and save those in a new Stata dataset. For instance, you can type
. bayespredict (ymin:@min({_ysim})) (ymax:@max({_ysim})), saving(yminmax)
to compute minimums and maximums of the simulated values. You can then use other postestimation commands such as bayesgraph to obtain summaries of the predictions.
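As a brief sketch of such a follow-up, here using bayesstats summary rather than bayesgraph and assuming the yminmax prediction dataset saved above:
. bayesstats summary {ymin} {ymax} using yminmax
This summarizes the posterior distributions of the simulated minimum and maximum.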
The dataset created by bayespredict may include thousands of simulated values for each observation in your dataset. Sometimes, you do not need all of these individual values. To instead obtain posterior summaries such as posterior means or medians, you can use bayespredict, pmean or bayespredict, pmedian. Alternatively, you may be interested in a random sample of the simulated values. You can use, for instance, bayesreps, nreps(100) to obtain 100 replicates.
Finally, you may want to evaluate model goodness of fit using posterior predictive p-values, also known as PPPs or Bayesian predictive p-values. PPPs measure agreement between observed and replicated data and can be computed using the new bayesstats ppvalues command; values near 0.5 suggest good agreement, whereas values close to 0 or 1 indicate misfit. For instance, continuing our earlier example, we can compute PPPs for the simulated minimum and maximum:
. bayesstats ppvalues {ymin} {ymax} using yminmax
Focus and dedication are the service principles of 科学软件网; developing * software, delivering * technology, and providing attentive service are the ** goals we put into practice through concrete action, and we will work tirelessly toward them.
http://turntech8843.b2b168.com