Dataset schema: question (string, 18 to 38.8k chars), source (list, length 3), score (int64, 4 to 12), dataset (string, 1 class), answer (string, 0 to 28.8k chars).
I have two different analytical methods that can measure the concentration of a particular molecule in a matrix (for instance, measure the amount of salt in water). The two methods are different, and each has its own error. What ways exist to show whether the two methods are equivalent (or not)? I'm thinking that plotting the results from a number of samples measured by both methods on a scatter graph is a good first step, but are there any good statistical methods?
[ "https://stats.stackexchange.com/questions/527", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/114/" ]
5
HuggingFaceH4/stack-exchange-preferences
The simple correlation approach isn't the right way to analyze results from method comparison studies. There are (at least) two highly recommended books on this topic, referenced at the end (1, 2). Briefly stated, when comparing measurement methods we usually expect that (a) our conclusions should not depend on the particular sample used for the comparison, and (b) measurement error associated with the particular measurement instrument should be accounted for. This precludes any method based on correlations, and we should turn our attention to variance components or mixed-effects models that allow us to reflect the systematic effect of item (here, item stands for the individual or sample on which data are collected), which follows from (a). In your case, you have single measurements collected using two different methods (I assume that neither of them can be considered a gold standard), and the very basic thing to do is to plot the differences ($X_1-X_2$) versus the means ($(X_1+X_2)/2$); this is called a Bland-Altman plot. It will allow you to check whether (1) the variations between the two sets of measurements are constant and (2) the variance of the difference is constant across the range of observed values. Basically, this is just a 45° rotation of a simple scatterplot of $X_1$ vs. $X_2$, and its interpretation is close to that of a plot of fitted vs. residual values used in linear regression. Then:
if the difference is constant (constant bias), you can compute the limits of agreement (see (3));
if the difference is not constant across the range of measurement, you can fit a linear regression model between the two methods (choose the one you want as predictor);
if the variance of the differences is not constant, try to find a suitable transformation that makes the relationship linear with constant variance.
Other details may be found in (2), chapter 4.
References
Dunn, G (2004). Design and Analysis of Reliability Studies. Arnold. See the review in the International Journal of Epidemiology.
Carstensen, B (2010). Comparing Clinical Measurement Methods. Wiley. See the companion website, including R code.
Bland, JM and Altman, DG. Statistical methods for assessing agreement between two methods of clinical measurement. (The original article.)
Carstensen, B (2004). Comparing and predicting between several methods of measurement. Biostatistics, 5(3), 399-413.
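As a rough illustration of the Bland-Altman step described above, here is a minimal base-R sketch on simulated data (the sample size, the "true" concentrations, and the error sizes are made up for the example):

    set.seed(1)
    truth <- runif(50, 5, 15)                  # unknown true concentrations
    x1 <- truth + rnorm(50, 0, 0.5)            # method 1 with its own error
    x2 <- truth + 0.3 + rnorm(50, 0, 0.7)      # method 2 with a small constant bias

    d <- x1 - x2                               # differences
    m <- (x1 + x2) / 2                         # means
    plot(m, d, xlab = "Mean of the two methods",
         ylab = "Difference (method 1 - method 2)")
    abline(h = mean(d), lty = 1)                           # average bias
    abline(h = mean(d) + c(-1.96, 1.96) * sd(d), lty = 2)  # limits of agreement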
We all know the mantra "correlation does not imply causation" which is drummed into all first year statistics students. There are some nice examples here to illustrate the idea. But sometimes correlation does imply causation. The following example is taken from this Wikipedia page: For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation. Are there other situations where correlation implies causation?
[ "https://stats.stackexchange.com/questions/534", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ]
7
HuggingFaceH4/stack-exchange-preferences
Correlation is not sufficient for causation. One can get around the Wikipedia example by imagining that those twins always cheated in their tests by having a device that gave them the answers. The twin that goes to the amusement park loses the device, hence the low grade. A good way to get this stuff straight is to think of the structure of the Bayesian network that may be generating the measured quantities, as done by Pearl in his book Causality. His basic point is to look for hidden variables. If there is a hidden variable that happens not to vary in the measured sample, then the correlation would not imply causation. Expose all hidden variables and you have causation.
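To make the hidden-variable point concrete, here is a small simulation (all numbers made up): $x$ has no causal effect on $y$, yet both depend on an unobserved $z$, so they are strongly correlated in the sample.

    set.seed(42)
    z <- rnorm(1000)           # hidden common cause
    x <- 2 * z + rnorm(1000)   # x is driven by z, not by y
    y <- 3 * z + rnorm(1000)   # y is driven by z, not by x
    cor(x, y)                  # large correlation despite no causal link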
In answering this question on discrete and continuous data I glibly asserted that it rarely makes sense to treat categorical data as continuous. On the face of it that seems self-evident, but intuition is often a poor guide for statistics, or at least mine is. So now I'm wondering: is it true? Or are there established analyses for which a transform from categorical data to some continuum is actually useful? Would it make a difference if the data were ordinal?
[ "https://stats.stackexchange.com/questions/539", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/174/" ]
6
HuggingFaceH4/stack-exchange-preferences
I will assume that a "categorical" variable actually stands for an ordinal variable; otherwise it doesn't make much sense to treat it as a continuous one, unless it's a binary variable (coded 0/1) as pointed out by @Rob. Then, I would say that the problem is not so much the way we treat the variable (although many models for categorical data analysis have been developed so far; see, e.g., The analysis of ordered categorical data: An overview and a survey of recent developments by Liu and Agresti) as the underlying measurement scale we assume. My response will focus on this second point, although I will first briefly discuss the assignment of numerical scores to variable categories or levels. By using a simple numerical recoding of an ordinal variable, you are assuming that the variable has interval properties (in the sense of the classification given by Stevens, 1946). From a measurement theory perspective (in psychology), this may often be too strong an assumption, but for a basic study (i.e., where a single item with clear wording is used to express one's opinion about a daily activity) any monotone scores should give comparable results. Cochran (1954) already pointed out that any set of scores gives a valid test, provided that they are constructed without consulting the results of the experiment. If the set of scores is poor, in that it badly distorts a numerical scale that really does underlie the ordered classification, the test will not be sensitive. The scores should therefore embody the best insight available about the way in which the classification was constructed and used. (p. 436) (Many thanks to @whuber for reminding me about this through one of his comments, which led me to re-read Agresti's book, from which this citation comes.) Actually, several tests implicitly treat such variables as interval scales: for example, the $M^2$ statistic for testing a linear trend (as an alternative to simple independence) is based on a correlational approach ($M^2=(n-1)r^2$, Agresti, 2002, p. 87). You can also decide to recode your variable on an irregular range, or aggregate some of its levels, but in this case strong imbalance between recoded categories may distort statistical tests, e.g. the aforementioned trend test. A nice alternative for assigning distances between categories was already proposed by @Jeromy, namely optimal scaling. Now, let's discuss the second point I made, that of the underlying measurement model. I always hesitate to add the "psychometrics" tag when I see this kind of question, because the construction and analysis of measurement scales come under psychometric theory (see Nunnally and Bernstein, 1994, for a neat overview). I will not dwell on all the models that are grouped under Item Response Theory, and I kindly refer the interested reader to I. Partchev's tutorial, A visual guide to item response theory, for a gentle introduction to IRT, and to references (5-8) listed at the end for possible IRT taxonomies. Very briefly, the idea is that rather than assigning arbitrary distances between variable categories, you assume a latent scale and estimate the categories' locations on that continuum, together with individuals' ability or liability. A simple example is worth more than a lot of mathematical notation, so let's consider the following item (coming from the EORTC QLQ-C30 health-related quality of life questionnaire): Did you worry? which is coded on a four-point scale, ranging from "Not at all" to "Very much". Raw scores are computed by assigning a score of 1 to 4.
Scores on items belonging to the same scale can then be added together to yield a so-called scale score, which denotes one's rank on the underlying construct (here, a mental health component). Such summated scale scores are very practical because they are easy to score (for the practitioner or nurse), but they are nothing more than a discrete (ordered) scale. We can also consider that the probability of endorsing a given response category obeys some kind of logistic model, as described in I. Partchev's tutorial, referred to above. Basically, the idea is that of a kind of threshold model (which leads to equivalent formulations in terms of the proportional or cumulative odds models), and we model the odds of being in one response category rather than the preceding one, or the odds of scoring above a certain category, conditional on subjects' location on the latent trait. In addition, we may impose that response categories are equally spaced on the latent scale (this is the Rating Scale model), which is what we do when assigning regularly spaced numerical scores, or not (this is the Partial Credit model). Clearly, we are not adding very much to Classical Test Theory, where ordinal variables are treated as numerical ones. However, we introduce a probabilistic model, where we assume a continuous scale (with interval properties) and where specific errors of measurement can be accounted for, and we can plug these factor scores into any regression model.
References
S S Stevens. On the theory of scales of measurement. Science, 103: 677-680, 1946.
W G Cochran. Some methods of strengthening the common $\chi^2$ tests. Biometrics, 10: 417-451, 1954.
J Nunnally and I Bernstein. Psychometric Theory. McGraw-Hill, 1994.
Alan Agresti. Categorical Data Analysis. Wiley, 1990.
C R Rao and S Sinharay, editors. Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V., The Netherlands, 2007.
A Boomsma, M A J van Duijn, and T A B Snijders. Essays on Item Response Theory. Springer, 2001.
D Thissen and L Steinberg. A taxonomy of item response models. Psychometrika, 51(4): 567-577, 1986.
P Mair and R Hatzinger. Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R. Journal of Statistical Software, 20(9), 2007.
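As a sketch of the $M^2$ trend test mentioned above, the statistic can be computed directly from the assigned scores; the 2 x 4 table below is purely hypothetical:

    # M^2 linear-trend test (Agresti, 2002, p. 87) on a hypothetical
    # 2 x 4 table of a binary group by the four response categories.
    tab <- matrix(c(10, 20, 30, 15,
                    25, 20, 10,  5), nrow = 2, byrow = TRUE)
    u <- 1:2                                  # row scores
    v <- 1:4                                  # column scores ("Not at all" ... "Very much")
    dat <- as.data.frame(as.table(tab))       # columns Var1, Var2, Freq
    idx <- rep(seq_len(nrow(dat)), dat$Freq)  # expand counts into observations
    r   <- cor(u[as.integer(dat$Var1)][idx], v[as.integer(dat$Var2)][idx])
    n   <- sum(tab)
    M2  <- (n - 1) * r^2
    pchisq(M2, df = 1, lower.tail = FALSE)    # p-value for the linear trend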
ANOVA is equivalent to linear regression with the use of suitable dummy variables. The conclusions remain the same irrespective of whether you use ANOVA or linear regression. In light of their equivalence, is there any reason why ANOVA is used instead of linear regression? Note: I am particularly interested in hearing about technical reasons for the use of ANOVA instead of linear regression. Edit: Here is one example using one-way ANOVA. Suppose you want to know if the average height of males and females is the same. To test your hypothesis you would collect data from a random sample of males and females (say 30 each) and perform the ANOVA (i.e., sum of squares for sex and error) to decide whether an effect exists. You could also use linear regression to test for this as follows: Define: $\text{Sex} = 1$ if the respondent is a male and $0$ otherwise. $$ \text{Height} = \text{Intercept} + \beta * \text{Sex} + \text{error} $$ where: $\text{error}\sim\mathcal N(0,\sigma^2)$ Then a test of whether $\beta = 0$ is an equivalent test for your hypothesis.
[ "https://stats.stackexchange.com/questions/555", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
7
HuggingFaceH4/stack-exchange-preferences
Speaking as an economist: the analysis of variance (ANOVA) is taught, and usually understood, in relation to linear regression (e.g. in Arthur Goldberger's A Course in Econometrics). Economists/econometricians typically view ANOVA as uninteresting and prefer to move straight to regression models. From the perspective of linear (or even generalised linear) models, ANOVA assigns coefficients into batches, with each batch corresponding to a "source of variation" in ANOVA terminology. Generally you can replicate the inferences you would obtain from ANOVA using regression, but not always OLS regression. Multilevel models are needed for analysing hierarchical data structures such as "split-plot designs," where between-group effects are compared to group-level errors, and within-group effects are compared to data-level errors. Gelman's paper [1] goes into great detail about this problem and effectively argues that ANOVA is an important statistical tool that should still be taught for its own sake. In particular, Gelman argues that ANOVA is a way of understanding and structuring multilevel models. Therefore ANOVA is not an alternative to regression but a tool for summarizing complex high-dimensional inferences and for exploratory data analysis. Gelman is a well-respected statistician and some credence should be given to his view. However, almost all of the empirical work that I do would be equally well served by linear regression, so I firmly fall into the camp of viewing ANOVA as a little bit pointless. Some disciplines with complex study designs (e.g. psychology) may find ANOVA useful. [1] Gelman, A. (2005). Analysis of variance: why it is more important than ever (with discussion). Annals of Statistics 33, 1–53. doi:10.1214/009053604000001048
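One quick way to see the equivalence discussed in the question and in this answer is to fit both versions in R on simulated data (a sketch; the effect size and sample sizes are arbitrary):

    set.seed(1)
    sex    <- rep(c(0, 1), each = 30)                   # 0 = female, 1 = male
    height <- 165 + 12 * sex + rnorm(60, 0, 7)

    fit_lm  <- lm(height ~ sex)           # regression with a dummy variable
    fit_aov <- aov(height ~ factor(sex))  # one-way ANOVA

    summary(fit_lm)     # t-test of beta for sex
    anova(fit_lm)       # same F statistic and p-value as...
    summary(fit_aov)    # ...the ANOVA table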
This is a fairly general question: I have typically found that using multiple different models outperforms one model when trying to predict a time series out of sample. Are there any good papers that demonstrate that a combination of models will outperform a single model? Are there any best practices around combining multiple models? Some references: Hui Zou, Yuhong Yang, "Combining time series models for forecasting," International Journal of Forecasting 20 (2004) 69–84.
[ "https://stats.stackexchange.com/questions/562", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ]
4
HuggingFaceH4/stack-exchange-preferences
Sometimes this kind of model is called an ensemble. For example, this page gives a nice overview of how it works. The references mentioned there are also very useful.
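As a toy illustration of combining forecasts (a sketch, not a recommendation of these particular models), one can average the predictions of two simple models fitted in base R and compare out-of-sample errors:

    set.seed(123)
    y     <- arima.sim(model = list(ar = 0.6), n = 120) + 10
    train <- window(y, end = 100)
    test  <- window(y, start = 101)

    f1 <- predict(arima(train, order = c(1, 0, 0)), n.ahead = 20)$pred  # AR(1) forecast
    f2 <- rep(mean(train), 20)                                          # naive mean forecast
    combo <- (as.numeric(f1) + f2) / 2                                  # equal-weight combination

    mean((test - f1)^2); mean((test - f2)^2); mean((test - combo)^2)    # compare MSEs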
Instrumental variables are becoming increasingly common in applied economics and statistics. For the uninitiated, can we have some non-technical answers to the following questions: What is an instrumental variable? When would one want to employ an instrumental variable? How does one find or choose an instrumental variable?
[ "https://stats.stackexchange.com/questions/563", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/215/" ]
6
HuggingFaceH4/stack-exchange-preferences
[The following perhaps seems a little technical because of the use of equations, but it builds mainly on the arrow charts to provide the intuition, which only requires a very basic understanding of OLS - so don't be put off.] Suppose you want to estimate the causal effect of $x_i$ on $y_i$ given by the estimated coefficient for $\beta$, but for some reason there is a correlation between your explanatory variable and the error term: $$\begin{matrix}y_i &=& \alpha &+& \beta x_i &+& \epsilon_i & \\ & && & & \hspace{-1cm}\nwarrow & \hspace{-0.8cm} \nearrow \\ & & & & & corr & \end{matrix}$$ This might have happened because we forgot to include an important variable that also correlates with $x_i$. This problem is known as omitted variable bias, and then your $\widehat{\beta}$ will not give you the causal effect (see here for the details). This is a case when you would want to use an instrument, because only then can you find the true causal effect. An instrument is a new variable $z_i$ which is uncorrelated with $\epsilon_i$, but that correlates well with $x_i$ and which only influences $y_i$ through $x_i$ - so our instrument is what is called "exogenous". It's like in this chart here: $$\begin{matrix} z_i & \rightarrow & x_i & \rightarrow & y_i \newline & & \uparrow & \nearrow & \newline & & \epsilon_i & \end{matrix}$$ So how do we use this new variable? Maybe you remember the ANOVA-type idea behind regression, where you split the total variation of a dependent variable into an explained and an unexplained component. For example, if you regress your $x_i$ on the instrument, $$\underbrace{x_i}_{\text{total variation}} = \underbrace{a \quad + \quad \pi z_i}_{\text{explained variation}} \quad + \underbrace{\eta_i}_{\text{unexplained variation}}$$ then you know that the explained variation here is exogenous to our original equation because it depends on the exogenous variable $z_i$ only. So in this sense, we split our $x_i$ up into a part that we can claim is certainly exogenous (that's the part that depends on $z_i$) and some unexplained part $\eta_i$ that keeps all the bad variation which correlates with $\epsilon_i$. Now we take the exogenous part of this regression, call it $\widehat{x_i}$, $$x_i \quad = \underbrace{a \quad + \quad \pi z_i}_{\text{good variation} \: = \: \widehat{x}_i } \quad + \underbrace{\eta_i}_{\text{bad variation}}$$ and put this into our original regression: $$y_i = \alpha + \beta \widehat{x}_i + \epsilon_i$$ Now since $\widehat{x}_i$ is not correlated anymore with $\epsilon_i$ (remember, we "filtered out" this part from $x_i$ and left it in $\eta_i$), we can consistently estimate our $\beta$ because the instrument has helped us to break the correlation between the explanatory variable and the error. This is one way to apply instrumental variables. This method is actually called 2-stage least squares, where our regression of $x_i$ on $z_i$ is called the "first stage" and the last equation here is called the "second stage". In terms of our original picture (I leave out the $\epsilon_i$ to not make a mess, but remember that it is there!), instead of taking the direct but flawed route from $x_i$ to $y_i$ we took an intermediate step via $\widehat{x}_i$ $$\begin{matrix} & & & & & \widehat{x}_i \newline & & & & \nearrow & \downarrow \newline & z_i & \rightarrow & x_i & \rightarrow & y_i \end{matrix}$$ Thanks to this slight diversion of our road to the causal effect we were able to consistently estimate $\beta$ by using the instrument.
The cost of this diversion is that instrumental variables models are generally less precise, meaning that they tend to have larger standard errors. How do we find instruments? That's not an easy question because you need to make a good case as to why your $z_i$ would not be correlated with $\epsilon_i$ - this cannot be tested formally because the true error is unobserved. The main challenge is therefore to come up with something that can be plausibly seen as exogenous such as natural disasters, policy changes, or sometimes you can even run a randomized experiment. The other answers had some very good examples for this so I won't repeat this part.
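For concreteness, here is a minimal simulation of the two-stage procedure sketched above (all numbers are made up; in practice one would use a dedicated IV routine so that the second-stage standard errors are computed correctly):

    set.seed(1)
    n   <- 1000
    z   <- rnorm(n)                        # instrument
    eps <- rnorm(n)
    x   <- 0.8 * z + 0.6 * eps + rnorm(n)  # x is correlated with the error
    y   <- 1 + 2 * x + eps                 # true beta = 2

    coef(lm(y ~ x))["x"]                   # OLS: biased estimate of beta

    x_hat <- fitted(lm(x ~ z))             # first stage: keep the exogenous part of x
    coef(lm(y ~ x_hat))["x_hat"]           # second stage: close to the true beta = 2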
Difference in differences has long been popular as a non-experimental tool, especially in economics. Can somebody please provide a clear and non-technical answer to the following questions about difference-in-differences. What is a difference-in-difference estimator? Why is a difference-in-difference estimator any use? Can we actually trust difference-in-difference estimates?
[ "https://stats.stackexchange.com/questions/564", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/215/" ]
6
HuggingFaceH4/stack-exchange-preferences
What is a difference-in-differences estimator?
Difference in differences (DiD) is a tool to estimate treatment effects by comparing the pre- and post-treatment differences in the outcome of a treatment and a control group. In general, we are interested in estimating the effect of a treatment $D_i$ (e.g. union status, medication, etc.) on an outcome $Y_i$ (e.g. wages, health, etc.) as in $$Y_{it} = \alpha_i + \lambda_t + \rho D_{it} + X'_{it}\beta + \epsilon_{it}$$ where $\alpha_i$ are individual fixed effects (characteristics of individuals that do not change over time), $\lambda_t$ are time fixed effects, $X_{it}$ are time-varying covariates like individuals' age, and $\epsilon_{it}$ is an error term. Individuals and time are indexed by $i$ and $t$, respectively. If there is a correlation between the fixed effects and $D_{it}$, then estimating this regression via OLS will be biased given that the fixed effects are not controlled for. This is the typical omitted variable bias. To see the effect of a treatment we would like to know the difference between a person in a world in which she received the treatment and one in which she did not. Of course, only one of these is ever observable in practice. Therefore we look for people with the same pre-treatment trends in the outcome. Suppose we have two periods $t = 1, 2$ and two groups $s = A,B$. Then, under the assumption that the trends in the treatment and control groups would have continued the same way as before in the absence of treatment, we can estimate the treatment effect as $$\rho = (E[Y_{ist}|s=A,t=2] - E[Y_{ist}|s=A,t=1]) - (E[Y_{ist}|s=B,t=2] - E[Y_{ist}|s=B,t=1])$$ Graphically, this is the familiar parallel-trends picture (figure omitted here). You can simply calculate these means by hand, i.e. obtain the mean outcome of group $A$ in both periods and take their difference. Then obtain the mean outcome of group $B$ in both periods and take their difference. Then take the difference in the differences and that's the treatment effect. However, it is more convenient to do this in a regression framework because this allows you to control for covariates and to obtain standard errors for the treatment effect, so that you can see whether it is significant. To do this, you can follow either of two equivalent strategies. Generate a treatment group dummy $\text{treat}_i$ which is equal to 1 if a person is in group $A$ and 0 otherwise, generate a time dummy $\text{time}_t$ which is equal to 1 if $t=2$ and 0 otherwise, and then regress $$Y_{it} = \beta_1 + \beta_2 (\text{treat}_i) + \beta_3 (\text{time}_t) + \rho (\text{treat}_i \cdot \text{time}_t) + \epsilon_{it}$$ Or you simply generate a dummy $T_{it}$ which equals one if a person is in the treatment group AND the time period is the post-treatment period, and is zero otherwise. Then you would regress $$Y_{it} = \beta_1 \gamma_s + \beta_2 \lambda_t + \rho T_{it} + \epsilon_{it}$$ where $\gamma_s$ is a group dummy and $\lambda_t$ are time dummies. The two regressions give you the same results for two periods and two groups. The second equation is more general though, as it easily extends to multiple groups and time periods. In either case, this is how you can estimate the difference-in-differences parameter in a way such that you can include control variables (I left those out of the above equations to not clutter them up, but you can simply include them) and obtain standard errors for inference.
Why is the difference-in-differences estimator useful?
As stated before, DiD is a method to estimate treatment effects with non-experimental data. That's the most useful feature. DiD is also a version of fixed effects estimation. Whereas the fixed effects model assumes $E(Y_{0it}|i,t) = \alpha_i + \lambda_t$, DiD makes a similar assumption but at the group level, $E(Y_{0it}|s,t) = \gamma_s + \lambda_t$. So the expected value of the outcome here is the sum of a group and a time effect. So what's the difference? For DiD you don't necessarily need panel data as long as your repeated cross sections are drawn from the same aggregate unit $s$. This makes DiD applicable to a wider array of data than the standard fixed effects models that require panel data. Can we trust difference in differences? The most important assumption in DiD is the parallel trends assumption (see the figure above). Never trust a study that does not graphically show these trends! Papers in the 1990s might have gotten away with this but nowadays our understanding of DiD is much better. If there is no convincing graph that shows the parallel trends in the pre-treatment outcomes for the treatment and control groups, be cautious. If the parallel trends assumption holds and we can credibly rule out any other time-variant changes that may confound the treatment, then DiD is a trustworthy method. Another word of caution should be applied when it comes to the treatment of standard errors. With many years of data you need to adjust the standard errors for autocorrelation. In the past, this has been neglected but since Bertrand et al. (2004) "How Much Should We Trust Differences-In-Differences Estimates?" we know that this is an issue. In the paper they provide several remedies for dealing with autocorrelation. The easiest is to cluster on the individual panel identifier which allows for arbitrary correlation of the residuals among individual time series. This corrects for both autocorrelation and heteroscedasticity. For further references see these lecture notes by Waldinger and Pischke.
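Here is a small simulated version of the interaction-term regression described above (the group, time, and treatment effects are arbitrary made-up numbers):

    set.seed(1)
    n     <- 200
    treat <- rep(c(0, 1), each = n / 2)   # control vs treated group
    time  <- rep(c(0, 1), times = n / 2)  # pre vs post period
    rho   <- 3                            # true treatment effect
    y     <- 5 + 2 * treat + 1.5 * time + rho * treat * time + rnorm(n)

    fit <- lm(y ~ treat * time)
    coef(summary(fit))["treat:time", ]    # DiD estimate of rho, with standard error

    # the same number from the four group means:
    with(data.frame(y, treat, time),
         (mean(y[treat == 1 & time == 1]) - mean(y[treat == 1 & time == 0])) -
         (mean(y[treat == 0 & time == 1]) - mean(y[treat == 0 & time == 0])))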
I'm curious if there are graphical techniques particular, or more applicable, to structural equation modeling. I guess this could fall into categories for exploratory tools for covariance analysis or graphical diagnostics for SEM model evaluation. (I'm not really thinking of path/graph diagrams here.)
[ "https://stats.stackexchange.com/questions/570", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/251/" ]
5
HuggingFaceH4/stack-exchange-preferences
I met Laura Trinchera, who contributed a nice R package for PLS path modeling, plspm. It includes several graphical outputs for various kinds of 2- and k-block data structures. I just discovered the plotSEMM R package. It's more related to your second point, though, and is restricted to graphing bivariate relationships. As for recent references on diagnostic plots for SEMs, here are a few papers that may be interesting (for the second one, I just browsed the abstract recently but cannot find an ungated version):
Sanchez BN, Houseman EA, and Ryan LM. Residual-Based Diagnostics for Structural Equation Models. Biometrics (2009) 65, 104-115.
Yuan KH and Hayashi K. Fitting data to model: Structural equation modeling diagnosis using two scatter plots. Psychological Methods (2010).
Porzio GC and Vitale MP. Discovering interaction in Structural Equation Models through a diagnostic plot. ISI 58th World Congress (2011).
What is the preferred method for conducting post-hocs for within-subjects tests? I've seen published work where Tukey's HSD is employed, but a review of Keppel and of Maxwell & Delaney suggests that the likely violation of sphericity in these designs makes the error term incorrect and this approach problematic. Maxwell & Delaney provide an approach to the problem in their book, but I've never seen it done that way in any stats package. Is the approach they offer appropriate? Would a Bonferroni or Sidak correction on multiple paired-sample t-tests be reasonable? An acceptable answer will provide general R code which can conduct post-hocs on simple, multiple-way, and mixed designs as produced by the ezANOVA function in the ez package, and appropriate citations that are likely to pass muster with reviewers.
[ "https://stats.stackexchange.com/questions/575", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ]
5
HuggingFaceH4/stack-exchange-preferences
I am currently writing a paper in which I have the pleasure of conducting both between- and within-subjects comparisons. After discussion with my supervisor we decided to run t-tests and use the pretty simple Holm-Bonferroni method (Wikipedia) to correct for alpha-error accumulation. It controls the familywise error rate but has greater power than the ordinary Bonferroni procedure. Procedure: You run the t-tests for all comparisons you want to do. You order the p-values according to their value. You test the smallest p-value against alpha / k, the second smallest against alpha / (k - 1), and so forth, until the first test turns out non-significant in this sequence of tests. Cite Holm (1979), which can be downloaded via the link on Wikipedia.
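In R the correction itself is a one-liner; a sketch with made-up within-subject data (three conditions measured on the same 20 subjects):

    set.seed(1)
    scores <- data.frame(cond1 = rnorm(20, 10),
                         cond2 = rnorm(20, 10.5),
                         cond3 = rnorm(20, 11))

    p <- c(t.test(scores$cond1, scores$cond2, paired = TRUE)$p.value,
           t.test(scores$cond1, scores$cond3, paired = TRUE)$p.value,
           t.test(scores$cond2, scores$cond3, paired = TRUE)$p.value)

    p.adjust(p, method = "holm")   # Holm-Bonferroni adjusted p-values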
The AIC and BIC are both methods of assessing model fit penalized for the number of estimated parameters. As I understand it, BIC penalizes models more for free parameters than does AIC. Beyond a preference based on the stringency of the criteria, are there any other reasons to prefer AIC over BIC or vice versa?
[ "https://stats.stackexchange.com/questions/577", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ]
9
HuggingFaceH4/stack-exchange-preferences
Your question implies that AIC and BIC try to answer the same question, which is not true. The AIC tries to select the model that most adequately describes an unknown, high-dimensional reality. This means that reality is never in the set of candidate models that are being considered. On the contrary, BIC tries to find the TRUE model among the set of candidates. I find it quite odd to assume that reality is instantiated in one of the models that the researchers built along the way. This is a real issue for BIC. Nevertheless, there are a lot of researchers who say BIC is better than AIC, using model recovery simulations as an argument. These simulations consist of generating data from models A and B, and then fitting both datasets with the two models. Overfitting occurs when the wrong model fits the data better than the generating one. The point of these simulations is to see how well AIC and BIC correct these overfits. Usually, the results point to the fact that AIC is too liberal and still frequently prefers a more complex, wrong model over a simpler, true model. At first glance these simulations seem to be really good arguments, but the problem with them is that they are meaningless for AIC. As I said before, AIC does not consider that any of the candidate models being tested is actually true. According to AIC, all models are approximations to reality, and reality should never have a low dimensionality, at least not lower than some of the candidate models. My recommendation is to use both AIC and BIC. Most of the time they will agree on the preferred model; when they don't, just report it. If you are unhappy with both AIC and BIC and have free time to invest, look up Minimum Description Length (MDL), a totally different approach that overcomes the limitations of AIC and BIC. There are several measures stemming from MDL, like normalized maximum likelihood or the Fisher information approximation. The problem with MDL is that it is mathematically demanding and/or computationally intensive. Still, if you want to stick to simple solutions, a nice way of assessing model flexibility (especially when the numbers of parameters are equal, rendering AIC and BIC useless) is doing a parametric bootstrap, which is quite easy to implement. Here is a link to a paper on it. Some people here advocate the use of cross-validation. I personally have used it and don't have anything against it, but the issue is that the choice of the sample-cutting rule (leave-one-out, K-fold, etc.) is an unprincipled one.
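For what it's worth, computing both criteria side by side in R is trivial, which makes the "use both and report both" advice easy to follow (a sketch on simulated data):

    set.seed(1)
    x1 <- rnorm(100); x2 <- rnorm(100)
    y  <- 1 + 2 * x1 + rnorm(100)   # x2 is irrelevant

    m1 <- lm(y ~ x1)
    m2 <- lm(y ~ x1 + x2)

    AIC(m1, m2)   # AIC penalizes the extra parameter by 2
    BIC(m1, m2)   # BIC penalizes it by log(n), i.e. more heavily here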
I am currently using Viterbi training for an image segmentation problem. I wanted to know what the advantages/disadvantages are of using the Baum-Welch algorithm instead of Viterbi training.
[ "https://stats.stackexchange.com/questions/581", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/99/" ]
6
HuggingFaceH4/stack-exchange-preferences
The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visible outputs of your model, then the Viterbi algorithm gives you the most likely complete sequence of hidden states conditional on both your outputs and your model specification. The Baum-Welch algorithm gives you both the most likely hidden transition probabilities as well as the most likely set of emission probabilities given only the observed states of the model (and, usually, an upper bound on the number of hidden states). You also get the "pointwise" highest likelihood points in the hidden states, which is often slightly different from the single hidden sequence that is overall most likely. If you know your model and just want the latent states, then there is no reason to use the Baum-Welch algorithm. If you don't know your model, then you can't be using the Viterbi algorithm. Edited to add: See Peter Smit's comment; there's some overlap/vagueness in nomenclature. Some poking around led me to a chapter by Luis Javier Rodríguez and Inés Torres in "Pattern Recognition and Image Analysis" (ISBN 978-3-540-40217-6, pp. 845-857) which discusses the speed versus accuracy trade-offs of the two algorithms. Briefly, the Baum-Welch algorithm is essentially the Expectation-Maximization (EM) algorithm applied to an HMM; as a strict EM-type algorithm you're guaranteed to converge to at least a local maximum, and so for unimodal problems to find the MLE. It requires two passes over your data for each step, though, and the complexity gets very big in the length of the data and number of training samples. However, you do end up with the full conditional likelihood for your hidden parameters. The Viterbi training algorithm (as opposed to the "Viterbi algorithm") approximates the MLE to achieve a gain in speed at the cost of accuracy. It segments the data and then applies the Viterbi algorithm (as I understood it) to get the most likely state sequence in the segment, then uses that most likely state sequence to re-estimate the hidden parameters. This, unlike the Baum-Welch algorithm, doesn't give the full conditional likelihood of the hidden parameters, and so ends up reducing the accuracy while saving significant (the chapter reports 1 to 2 orders of magnitude) computational time.
I have tried to reproduce some research (using PCA) from SPSS in R. In my experience, the principal() function from the psych package was the only one that came close (or, if my memory serves me right, was dead on) to matching the output. To match the same results as in SPSS, I had to use the parameter principal(..., rotate = "varimax"). I have seen papers talk about how they did PCA, but based on the output of SPSS and the use of rotation, it sounds more like factor analysis. Question: Is PCA, even after rotation (using varimax), still PCA? I was under the impression that this might in fact be factor analysis... In case it's not, what details am I missing?
[ "https://stats.stackexchange.com/questions/612", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/144/" ]
7
HuggingFaceH4/stack-exchange-preferences
This question is largely about definitions of PCA/FA, so opinions might differ. My opinion is that PCA+varimax should not be called either PCA or FA, but rather explicitly referred to e.g. as "varimax-rotated PCA". I should add that this is quite a confusing topic. In this answer I want to explain what a rotation actually is; this will require some mathematics. A casual reader can skip directly to the illustration. Only then can we discuss whether PCA+rotation should or should not be called "PCA". One reference is Jolliffe's book "Principal Component Analysis", section 11.1 "Rotation of Principal Components", but I find it could be clearer. Let $\mathbf X$ be an $n \times p$ data matrix which we assume is centered. PCA amounts (see my answer here) to a singular-value decomposition: $\mathbf X=\mathbf{USV}^\top$. There are two equivalent but complementary views on this decomposition: a more PCA-style "projection" view and a more FA-style "latent variables" view. According to the PCA-style view, we found a bunch of orthogonal directions $\mathbf V$ (these are eigenvectors of the covariance matrix, also called "principal directions" or "axes"), and "principal components" $\mathbf{US}$ (also called principal component "scores") are projections of the data on these directions. Principal components are uncorrelated, the first one has maximal possible variance, etc. We can write: $$\mathbf X = \mathbf{US}\cdot \mathbf V^\top = \text{Scores} \cdot \text{Principal directions}.$$ According to the FA-style view, we found some uncorrelated unit-variance "latent factors" that give rise to the observed variables via "loadings". Indeed, $\widetilde{\mathbf U}=\sqrt{n-1}\mathbf{U}$ are standardized principal components (uncorrelated and with unit variance), and if we define loadings as $\mathbf L = \mathbf{VS}/\sqrt{n-1}$, then $$\mathbf X= \sqrt{n-1}\mathbf{U}\cdot (\mathbf{VS}/\sqrt{n-1})^\top =\widetilde{\mathbf U}\cdot \mathbf L^\top = \text{Standardized scores} \cdot \text{Loadings}.$$ (Note that $\mathbf{S}^\top=\mathbf{S}$.) Both views are equivalent. Note that loadings are eigenvectors scaled by the square roots of the respective eigenvalues ($\mathbf{S}^2/(n-1)$ are the eigenvalues of the covariance matrix). (I should add in brackets that PCA$\ne$FA; FA explicitly aims at finding latent factors that are linearly mapped to the observed variables via loadings; it is more flexible than PCA and yields different loadings. That is why I prefer to call the above "FA-style view on PCA" and not FA, even though some people take it to be one of the FA methods.) Now, what does a rotation do? E.g. an orthogonal rotation, such as varimax. First, it considers only $k<p$ components, i.e.: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_k \mathbf L^\top_k.$$ Then it takes a square orthogonal $k \times k$ matrix $\mathbf T$, and plugs $\mathbf T\mathbf T^\top=\mathbf I$ into this decomposition: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \mathbf U_k \mathbf T \mathbf T^\top \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_\mathrm{rot} \mathbf L^\top_\mathrm{rot},$$ where rotated loadings are given by $\mathbf L_\mathrm{rot} = \mathbf L_k \mathbf T$, and rotated standardized scores are given by $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U}_k \mathbf T$. (The purpose of this is to find $\mathbf T$ such that $\mathbf L_\mathrm{rot}$ becomes as close to being sparse as possible, to facilitate its interpretation.)
Note that what is rotated are: (1) standardized scores, (2) loadings. But not the raw scores and not the principal directions! So the rotation happens in the latent space, not in the original space. This is absolutely crucial. From the FA-style point of view, nothing much happened. (A) The latent factors are still uncorrelated and standardized. (B) They are still mapped to the observed variables via (rotated) loadings. (C) The amount of variance captured by each component/factor is given by the sum of squared values of the corresponding loadings column in $\mathbf L_\mathrm{rot}$. (D) Geometrically, loadings still span the same $k$-dimensional subspace in $\mathbb R^p$ (the subspace spanned by the first $k$ PCA eigenvectors). (E) The approximation to $\mathbf X$ and the reconstruction error did not change at all. (F) The covariance matrix is still approximated equally well:$$\boldsymbol \Sigma \approx \mathbf L_k\mathbf L_k^\top = \mathbf L_\mathrm{rot}\mathbf L_\mathrm{rot}^\top.$$ But the PCA-style point of view has practically collapsed. Rotated loadings do not correspond to orthogonal directions/axes in $\mathbb R^p$ anymore, i.e. columns of $\mathbf L_\mathrm{rot}$ are not orthogonal! Worse, if you [orthogonally] project the data onto the directions given by the rotated loadings, you will get correlated (!) projections and will not be able to recover the scores. [Instead, to compute the standardized scores after rotation, one needs to multiply the data matrix with the pseudo-inverse of loadings $\widetilde{\mathbf U}_\mathrm{rot} = \mathbf X (\mathbf L_\mathrm{rot}^+)^\top$. Alternatively, one can simply rotate the original standardized scores with the rotation matrix: $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U} \mathbf T$.] Also, the rotated components do not successively capture the maximal amount of variance: the variance gets redistributed among the components (even though all $k$ rotated components capture exactly as much variance as all $k$ original principal components). Here is an illustration. The data is a 2D ellipse stretched along the main diagonal. The first principal direction is the main diagonal, the second one is orthogonal to it. PCA loading vectors (eigenvectors scaled by the square roots of the eigenvalues) are shown in red -- pointing in both directions and also stretched by a constant factor for visibility. Then I applied an orthogonal rotation by $30^\circ$ to the loadings. The resulting loading vectors are shown in magenta. Note how they are not orthogonal (!). An FA-style intuition here is as follows: imagine a "latent space" where points fill a small circle (come from a 2D Gaussian with unit variances). This distribution of points is then stretched along the PCA loadings (red) to become the data ellipse that we see on this figure. However, the same distribution of points can be rotated and then stretched along the rotated PCA loadings (magenta) to become the same data ellipse. [To actually see that an orthogonal rotation of loadings is a rotation, one needs to look at a PCA biplot; there the vectors/rays corresponding to original variables will simply rotate.] Let us summarize. After an orthogonal rotation (such as varimax), the "rotated-principal" axes are not orthogonal, and orthogonal projections on them do not make sense. So one should rather drop this whole axes/projections point of view. It would be weird to still call it PCA (which is all about projections with maximal variance etc.).
From the FA-style point of view, we simply rotated our (standardized and uncorrelated) latent factors, which is a valid operation. There are no "projections" in FA; instead, latent factors generate the observed variables via loadings. This logic is still preserved. However, we started with principal components, which are not actually factors (as PCA is not the same as FA). So it would be weird to call it FA as well. Instead of debating whether one "should" rather call it PCA or FA, I would suggest being meticulous in specifying the exact procedure used: "PCA followed by a varimax rotation". Postscriptum. It is possible to consider an alternative rotation procedure, where $\mathbf{TT}^\top$ is inserted between $\mathbf{US}$ and $\mathbf V^\top$. This would rotate raw scores and eigenvectors (instead of standardized scores and loadings). The biggest problem with this approach is that after such a "rotation", scores will not be uncorrelated anymore, which is pretty fatal for PCA. One can do it, but it is not how rotations are usually understood and applied.
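If you want to check these claims numerically, here is a small base-R sketch (the simulated data and the choice of k = 2 components are arbitrary; psych::principal, mentioned in the question, wraps a similar computation):

    set.seed(1)
    X <- matrix(rnorm(200 * 6), 200, 6) %*% matrix(runif(36), 6, 6)  # correlated toy data
    p <- prcomp(X)
    k <- 2
    L <- p$rotation[, 1:k] %*% diag(p$sdev[1:k])  # loadings: eigenvectors * sqrt(eigenvalues)
    rot   <- varimax(L, normalize = FALSE)
    L_rot <- L %*% rot$rotmat                     # rotated loadings

    crossprod(L)              # diagonal: original loading columns are orthogonal
    crossprod(L_rot)          # off-diagonal entries: rotated columns are not orthogonal
    c(sum(L^2), sum(L_rot^2)) # total variance accounted for is unchanged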
There have been a few questions about statistical textbooks, such as the question Free statistical textbooks. However, I am looking for textbooks that are open source, for example, with a Creative Commons license. The reason is that course material in other domains often still needs to include some text about basic statistics. In that case, it would be interesting to reuse existing material instead of rewriting it. Therefore, what open-source textbooks on statistics (and perhaps machine learning) are available?
[ "https://stats.stackexchange.com/questions/614", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/107/" ]
4
HuggingFaceH4/stack-exchange-preferences
Michael Lavine: Introduction to Statistical Thought, licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
I see the terms partial, profile, and marginal likelihood being used, and I keep getting them mixed up. Is there a simple explanation of the differences between them?
[ "https://stats.stackexchange.com/questions/622", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ]
7
HuggingFaceH4/stack-exchange-preferences
The likelihood function usually depends on many parameters. Depending on the application, we are usually interested in only a subset of these parameters. For example, in linear regression, interest typically lies in the slope coefficients and not in the error variance. Denote the parameters we are interested in as $\beta$ and the parameters that are not of primary interest as $\theta$. The standard way to approach the estimation problem is to maximize the likelihood function so that we obtain estimates of $\beta$ and $\theta$. However, since the primary interest lies in $\beta$, partial, profile and marginal likelihood offer alternative ways to estimate $\beta$ without estimating $\theta$. In order to see the difference, denote the standard likelihood by $L(\beta, \theta|\mathrm{data})$. Maximum likelihood: Find $\beta$ and $\theta$ that maximize $L(\beta, \theta|\mathrm{data})$. Partial likelihood: If we can write the likelihood function as $$L(\beta, \theta|\mathrm{data}) = L_1(\beta|\mathrm{data}) L_2(\theta|\mathrm{data}),$$ then we simply maximize $L_1(\beta|\mathrm{data})$. Profile likelihood: If we can express $\theta$ as a function of $\beta$, say $\theta = g(\beta)$, then we replace $\theta$ with the corresponding function and maximize $$L(\beta, g(\beta)|\mathrm{data}).$$ Marginal likelihood: We integrate $\theta$ out of the likelihood equation by exploiting the fact that we can identify the probability distribution of $\theta$ conditional on $\beta$.
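As a small numerical illustration of the profile likelihood (a sketch): for normal data with mean $\mu$ of interest and nuisance variance $\sigma^2$, the profile plugs in the maximizing $\sigma^2$ for each fixed $\mu$.

    set.seed(1)
    x <- rnorm(50, mean = 2, sd = 3)

    profile_loglik <- function(mu) {
      sigma2_hat <- mean((x - mu)^2)   # maximizer of the likelihood for fixed mu
      sum(dnorm(x, mu, sqrt(sigma2_hat), log = TRUE))
    }

    mu_grid <- seq(0, 4, by = 0.01)
    pl <- sapply(mu_grid, profile_loglik)
    mu_grid[which.max(pl)]             # equals the sample mean, mean(x)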
What is an estimator of the standard deviation of the sample standard deviation if normality of the data can be assumed?
[ "https://stats.stackexchange.com/questions/631", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
7
HuggingFaceH4/stack-exchange-preferences
Let $X_1, ..., X_n \sim N(\mu, \sigma^2)$. As shown in this thread, the standard deviation of the sample standard deviation, $$ s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 }, $$ is $$ {\rm SD}(s) = \sqrt{ E \left( [E(s)- s]^2 \right) } = \sigma \sqrt{ 1 - \frac{2}{n-1} \cdot \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$ where $\Gamma(\cdot)$ is the gamma function, $n$ is the sample size and $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is the sample mean. Since $s$ is a consistent estimator of $\sigma$, this suggests replacing $\sigma$ with $s$ in the equation above to get a consistent estimator of ${\rm SD}(s)$. If it is an unbiased estimator you seek, we see in this thread that $ E(s) = \sigma \cdot \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } $, which, by linearity of expectation, suggests $$ s \cdot \sqrt{ \frac{n-1}{2} } \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } $$ as an unbiased estimator of $\sigma$. All of this together with linearity of expectation gives an unbiased estimator of ${\rm SD}(s)$: $$ s \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } \cdot \sqrt{\frac{n-1}{2} - \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$
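These expressions are easy to evaluate numerically; a small R sketch that uses lgamma to keep the gamma-function ratio stable for larger n:

    sd_of_sd <- function(s, n) {
      k <- exp(lgamma(n / 2) - lgamma((n - 1) / 2))   # Gamma(n/2) / Gamma((n-1)/2)
      unbiased <- s / k * sqrt((n - 1) / 2 - k^2)     # unbiased estimator above
      plug_in  <- s * sqrt(1 - (2 / (n - 1)) * k^2)   # s plugged in for sigma
      c(unbiased = unbiased, plug_in = plug_in)
    }

    set.seed(1)
    x <- rnorm(25)
    sd_of_sd(sd(x), length(x))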
I have a data set of about 3,000 field observations. The data collected is divided into 20 variables (real numbers), 30 boolean variables, 10 or so lookup variables, and one "answer" variable. We have about 20,000 objects in the field, and I'm trying to produce an "answer" for the 20,000 objects based on the 3,000 observations. What are some of the available methods that incorporate booleans and lookup tables? Any suggestions on how I should proceed? EDIT: The answer variable is a boolean as well. EDIT 2: A sample of the variable data:
age of specimen
length, area, volume
time since last inspection
height
design life
Lookup-table variables:
material type
coating type
design standard
design effectiveness
A sample of the booleans:
is it inspected?
is it in bad shape?
does it need repairs soon?
The answer variable, which is my f(x), is: is it usable?
[ "https://stats.stackexchange.com/questions/633", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/59/" ]
4
HuggingFaceH4/stack-exchange-preferences
You are describing "categorical variables" (represented in R as factors). These can be incorporated into almost any statistical model by being assigned levels. You would need to give more detail about your particular problem in order to be advised on a particular method. Edit: If the response variable has two possible outcomes, you might consider binomial or logistic regression. Note: If you're not familiar with the different kinds of variables in statistics, I suggest reading the first few chapters of Andrew Gelman's "Data Analysis Using Regression and Multilevel/Hierarchical Models", which covers this in a very understandable manner.
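A minimal sketch of the kind of model this suggests; the data frame and column names below are invented stand-ins for your variables:

    # df stands in for your 3,000 observations; the columns are hypothetical
    df <- data.frame(usable        = rbinom(100, 1, 0.6),   # the boolean "answer"
                     age           = runif(100, 0, 40),
                     inspected     = rbinom(100, 1, 0.5),
                     material_type = factor(sample(c("steel", "pvc", "iron"), 100, TRUE)))

    fit <- glm(usable ~ age + inspected + material_type,
               family = binomial, data = df)
    summary(fit)
    predict(fit, newdata = df[1:5, ], type = "response")   # predicted probabilities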
Suppose I have a sample of size N that I plan to use to forecast data. What are some of the ways to subdivide the data so that I use some of it to establish a model and the remainder to validate the model? I know there is no black and white answer to this, but it would be interesting to know some rules of thumb or commonly used ratios. I know back at university one of our professors used to say: model on 60% and validate on 40%.
[ "https://stats.stackexchange.com/questions/638", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/59/" ]
4
HuggingFaceH4/stack-exchange-preferences
Well, as you said, there is no black and white answer. I generally don't divide the data into 2 parts but instead use methods like k-fold cross-validation. In k-fold cross-validation you divide your data randomly into k parts, fit your model on k-1 parts, and test the errors on the left-out part. You repeat the process k times, leaving each part out of the fitting one by one. You can take the mean error over the k iterations as an indication of the model error. This works really well if you want to compare the predictive power of different models. One extreme form of k-fold cross-validation is leave-one-out cross-validation, where you just leave out one data point for testing and fit the model to all the remaining points, then repeat the process n times, leaving out each data point one by one. I generally prefer k-fold cross-validation over leave-one-out ... just a personal choice.
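A bare-bones version of that procedure in base R (a sketch using a linear model on made-up data):

    set.seed(1)
    n     <- 200
    dat   <- data.frame(x = rnorm(n))
    dat$y <- 1 + 2 * dat$x + rnorm(n)

    k     <- 10
    folds <- sample(rep(1:k, length.out = n))   # random fold assignment
    errs  <- numeric(k)

    for (i in 1:k) {
      fit     <- lm(y ~ x, data = dat[folds != i, ])        # fit on k-1 folds
      pred    <- predict(fit, newdata = dat[folds == i, ])  # predict the held-out fold
      errs[i] <- mean((dat$y[folds == i] - pred)^2)
    }
    mean(errs)   # cross-validated mean squared error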
Sites like eMarketer offer general survey results about internet usage. Who else has a big set of survey results, or regularly releases them? Preferably marketing research focused. Thanks!
[ "https://stats.stackexchange.com/questions/641", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74/" ]
4
HuggingFaceH4/stack-exchange-preferences
The best place to find survey data related to the social sciences is the ICPSR data clearinghouse: http://www.icpsr.umich.edu/icpsrweb/ICPSR/access/index.jsp Also, the 'survey' tag on Infochimps has many interesting and free data sets: http://infochimps.org/tags/survey
My father is a math enthusiast, but not interested in statistics much. It would be neat to try to illustrate some of the wonderful bits of statistics, and the CLT is a prime candidate. How would you convey the mathematical beauty and impact of the central limit theorem to a non-statistician?
[ "https://stats.stackexchange.com/questions/643", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7/" ]
4
HuggingFaceH4/stack-exchange-preferences
To fully appreciate the CLT, it should be seen. Hence the notion of the bean machine, and plenty of YouTube videos for illustration.
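Seeing it takes only a few lines of R (a sketch): sample means of draws from a flat, decidedly non-normal distribution pile up into a bell curve.

    set.seed(1)
    means <- replicate(10000, mean(runif(30)))   # 10,000 sample means of size 30
    hist(means, breaks = 50, main = "Sample means of uniform draws")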
Having just recently started teaching myself Machine Learning and Data Analysis I'm finding myself hitting a brick wall on the need for creating and querying large sets of data. I would like to take data I've been aggregating in my professional and personal life and analyze it but I'm uncertain of the best way to do the following: How should I be storing this data? Excel? SQL? ?? What is a good way for a beginner to begin trying to analyze this data? I am a professional computer programmer so the complexity is not in writing programs but more or less specific to the domain of data analysis. EDIT: Apologies for my vagueness, when you first start learning about something it's hard to know what you don't know, ya know? ;) Having said that, my aim is to apply this to two main topics: Software team metrics (think Agile velocity, quantifying risk, likelihood of a successfully completed iteration given x number of story points) Machine learning (ex. system exceptions have occurred in a given set of modules what is the likelihood that a module will throw an exception in the field, how much will that cost, what can the data tell me about key modules to improve that will get me the best bang for my buck, predict what portion of the system the user will want to use next in order to start loading data, etc).
[ "https://stats.stackexchange.com/questions/645", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9426/" ]
5
HuggingFaceH4/stack-exchange-preferences
If you have large data sets - ones that make Excel or Notepad load slowly - then a database is a good way to go. Postgres is open-source and very well-made, and it's easy to connect with JMP, SPSS and other programs. You may want to sample in this case. You don't have to normalize the data in the database. Otherwise, CSV is sharing-friendly. Consider Apache Hive if you have 100M+ rows.
In terms of analysis, here are some starting points:
Describe one variable:
Histogram
Summary statistics (mean, range, standard deviation, min, max, etc.)
Are there outliers? (greater than 1.5x inter-quartile range)
What sort of distribution does it follow? (normal, etc.)
Describe the relationship between variables:
Scatter plot
Correlation
Outliers? Check out Mahalanobis distance
Mosaic plot for categorical variables
Contingency table for categorical variables
Predict a real number (like price): regression
OLS regression or machine learning regression techniques
When the technique used to predict is understandable by humans, this is called modeling. For example, a neural network can make predictions, but is generally not understandable. You can use regression to find Key Performance Indicators too.
Predict class membership or probability of class membership (like passed/failed): classification
Logistic regression or machine learning techniques, such as SVM
Put observations into "natural" groups: clustering
Generally one finds "similar" observations by calculating the distance between them.
Put attributes into "natural" groups: factoring
And other matrix operations such as PCA, NMF
Quantifying risk = standard deviation, or proportion of times that "bad things" happen x how bad they are
Likelihood of a successfully completed iteration given x number of story points = logistic regression
Good luck!
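As a sketch, here is what a first pass over a built-in R data set might look like, touching a few of the starting points above (the variable choices are arbitrary):

    data(mtcars)
    summary(mtcars$mpg)                        # describe one variable
    hist(mtcars$mpg)
    plot(mtcars$wt, mtcars$mpg)                # relationship between two variables
    cor(mtcars$wt, mtcars$mpg)

    ols <- lm(mpg ~ wt + hp, data = mtcars)    # predict a real number
    summary(ols)

    cls <- glm(am ~ wt + hp, family = binomial, data = mtcars)  # class membership
    summary(cls)

    set.seed(1)
    kmeans(scale(mtcars[, c("mpg", "wt", "hp")]), centers = 3)  # "natural" groups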
I bought these books: How to Measure Anything: Finding the Value of Intangibles in Business and Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions. What other books would you recommend?
[ "https://stats.stackexchange.com/questions/652", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9426/" ]
4
HuggingFaceH4/stack-exchange-preferences
I didn't find How to Measure Anything or Head First particularly good. Statistics in Plain English (Urdan) is a good starter book. Once you finish that, Multivariate Data Analysis (Joseph Hair et al.) is fantastic. Good luck!
I am collecting textual data surrounding press releases, blog posts, reviews, etc of certain companies' products and performance. Specifically, I am looking to see if there are correlations between certain types and/or sources of such "textual" content with market valuations of the companies' stock symbols. Such apparent correlations can be found by the human mind fairly quickly - but that is not scalable. How can I go about automating such analysis of disparate sources?
[ "https://stats.stackexchange.com/questions/660", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/292/" ]
4
HuggingFaceH4/stack-exchange-preferences
My students do this as their class project. A few teams hit the 70%s for accuracy, with pretty small samples, which ain't bad. Let's say you have some data like this:
Return  Symbol  News Text
-4%     DELL    Centegra and Dell Services recognized with Outsourcing Center's...
7%      MSFT    Rising Service Revenues Benefit VMWare
1%      CSCO    Cisco Systems (CSCO) Receives 5 Star Strong Buy Rating From S&P
4%      GOOG    Summary Box: Google eyes more government deals
7%      AAPL    Sohu says 2nd-quarter net income rises 10 percent on higher...
You want to predict the return based on the text. This is called text mining. What you do ultimately is create an enormous matrix like this:
Return  Centegra  Rising  Services  Recognized  ...
-4%     0.23      0       0.11      0.34
7%      0         0.1     0.23      0
...
That has one column for every unique word, one row for each return, and a weighted score for each word. The score is often the TFIDF score, or the relative frequency of the word in the document. Then you run a regression and see if you can predict which words predict the return. You'll probably need to use PCA first.
Book: Fundamentals of Predictive Text Mining, Weiss
Software: RapidMiner with Text Plugin, or R
You should also do a search on Google Scholar and read up on the ins and outs. You can see my series of text mining videos here.
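A hand-rolled sketch of that term-by-document matrix in base R (the mini corpus is made up; a real project would use a text-mining package, but the construction is the same):

    docs <- c("rising service revenues benefit vmware",
              "google eyes more government deals",
              "rising revenues benefit google")      # made-up mini corpus

    tokens <- strsplit(docs, " ")
    vocab  <- sort(unique(unlist(tokens)))

    tf  <- t(sapply(tokens, function(w) table(factor(w, levels = vocab)) / length(w)))
    idf <- log(length(docs) / colSums(tf > 0))       # inverse document frequency
    tfidf <- sweep(tf, 2, idf, "*")                  # one row per document, one column per word
    round(tfidf, 2)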
What's the difference between probability and statistics, and why are they studied together?
[ "https://stats.stackexchange.com/questions/665", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/327/" ]
7
HuggingFaceH4/stack-exchange-preferences
The short answer to this I've heard from Persi Diaconis is the following: The problems considered by probability and statistics are inverse to each other. In probability theory we consider some underlying process which has some randomness or uncertainty modeled by random variables, and we figure out what happens. In statistics we observe something that has happened, and try to figure out what underlying process would explain those observations.
What are the main ideas, that is, concepts related to Bayes' theorem? I am not asking for any derivations of complex mathematical notation.
[ "https://stats.stackexchange.com/questions/672", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ]
6
HuggingFaceH4/stack-exchange-preferences
Bayes' theorem is a relatively simple, but fundamental result of probability theory that allows for the calculation of certain conditional probabilities. Conditional probabilities are just those probabilities that reflect the influence of one event on the probability of another. Simply put, in its most famous form, it states that the probability of a hypothesis given new data (P(H|D); called the posterior probability) is equal to the following equation: the probability of the observed data given the hypothesis (P(D|H); called the conditional probability), times the probability of the theory being true prior to new evidence (P(H); called the prior probability of H), divided by the probability of seeing that data, period (P(D); called the marginal probability of D). Formally, the equation looks like this: $P(H|D) = \frac{P(D|H)\,P(H)}{P(D)}$. The significance of Bayes' theorem is largely due to its proper use being a point of contention between schools of thought on probability. To a subjective Bayesian (who interprets probability as subjective degrees of belief), Bayes' theorem provides the cornerstone for theory testing, theory selection and other practices, by plugging their subjective probability judgments into the equation and running with it. To a frequentist (who interprets probability as limiting relative frequencies), this use of Bayes' theorem is an abuse, and they strive to instead use meaningful (non-subjective) priors (as do objective Bayesians under yet another interpretation of probability).
Oversimplifying a bit, I have about a million records that record the entry time and exit time of people in a system spanning about ten years. Every record has an entry time, but not every record has an exit time. The mean time in the system is ~1 year. The missing exit times happen for two reasons: the person has not left the system at the time the data was captured, or the person's exit time was not recorded (this happens to, say, 50% of the records). The questions of interest are: Are people spending less time in the system, and how much less time? Are more exit times being recorded, and how many? We can model this by saying that the probability that an exit gets recorded varies linearly with time, and that the time in the system has a Weibull distribution whose parameters vary linearly with time. We can then make a maximum likelihood estimate of the various parameters, eyeball the results, and deem them plausible. We chose the Weibull distribution because it seems to be used in measuring lifetimes and is fun to say, as opposed to fitting the data better than, say, a gamma distribution. Where should I look to get a clue as to how to do this correctly? We are somewhat mathematically savvy, but not extremely statistically savvy.
[ "https://stats.stackexchange.com/questions/692", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/72/" ]
4
HuggingFaceH4/stack-exchange-preferences
The basic way to see if your data is Weibull is to plot the log of the cumulative hazard versus the log of time and see if a straight line might be a good fit. The cumulative hazard can be found using the non-parametric Nelson-Aalen estimator. There are similar graphical diagnostics for Weibull regression if you fit your data with covariates, and some references follow. The Klein & Moeschberger text is pretty good and covers a lot of ground with model building/diagnostics for parametric and semi-parametric models (though mostly the latter). If you're working in R, Therneau's book is pretty good (I believe he wrote the survival package). It covers a lot of Cox PH and associated models, but I don't recall if it has much coverage of parametric models, like the one you're building. BTW, is this a million subjects each with one entry/exit or recurrent entry/exit events for some smaller pool of people? Are you conditioning your likelihood to account for the censoring mechanism?
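A hedged R sketch of that graphical check with the survival package; the data frame d, with columns time and status (1 = exit observed, 0 = censored), is hypothetical:
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = d)
H <- cumsum(fit$n.event / fit$n.risk)          # Nelson-Aalen cumulative hazard
keep <- H > 0
plot(log(fit$time[keep]), log(H[keep]),
     xlab = "log(time)", ylab = "log(cumulative hazard)")
# an approximately straight line is consistent with a Weibull model;
# its slope gives a rough estimate of the Weibull shape parameter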
I'm doing shopping cart analyses. My dataset is a set of transaction vectors, with the items being the products bought. When applying k-means to the transactions, I will always get some result. A random matrix would probably also show some clusters. Is there a way to test whether the clustering I find is a significant one, or whether it could very well be a coincidence? If yes, how can I do it?
[ "https://stats.stackexchange.com/questions/723", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/190/" ]
4
HuggingFaceH4/stack-exchange-preferences
Regarding shopping cart analysis, I think that the main objective is to identify the most frequent combinations of products bought by the customers. Association rules represent the most natural methodology here (indeed they were actually developed for this purpose). Analysing the combinations of products bought by the customers, and the number of times these combinations are repeated, leads to a rule of the type 'if condition, then result' with a corresponding interestingness measurement. You may also consider log-linear models in order to investigate the associations between the considered variables. Now, as for clustering, here is some information that may come in handy:
First consider variable clustering. Variable clustering is used for assessing collinearity and redundancy, and for separating variables into clusters that can be scored as a single variable, thus resulting in data reduction. Look for the varclus function (package Hmisc in R).
Assessment of clusterwise stability: function clusterboot (R package fpc).
Distance-based statistics for cluster validation: function cluster.stats (R package fpc).
As mbq has mentioned, use the silhouette widths for assessing the best number of clusters. Watch this. Regarding silhouette widths, see also the optsil function.
Estimate the number of clusters in a data set via the gap statistic.
For calculating dissimilarity indices and distance measures, see dsvdis and vegdist.
The EM clustering algorithm can decide how many clusters to create by cross validation (if you can't specify a priori how many clusters to generate). Although the EM algorithm is guaranteed to converge to a maximum, this is a local maximum and may not necessarily be the same as the global maximum. For a better chance of obtaining the global maximum, the whole procedure should be repeated several times, with different initial guesses for the parameter values. The overall log-likelihood figure can be used to compare the different final configurations obtained: just choose the largest of the local maxima. You can find an implementation of the EM clusterer in the open-source project WEKA.
This is also an interesting link. Also search here for "Finding the Right Number of Clusters in k-Means and EM Clustering: v-Fold Cross-Validation".
Finally, you may explore clustering results using clusterfly.
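A hedged sketch of two of the cluster-validation ideas above using the cluster package; the binary basket matrix X and the candidate range of 2-6 clusters are purely illustrative:
library(cluster)
set.seed(1)
X <- matrix(rbinom(100 * 10, 1, 0.3), ncol = 10)   # stand-in for a transaction matrix
# average silhouette width for several candidate numbers of clusters
avg.sil <- sapply(2:6, function(k) {
  km <- kmeans(X, centers = k, nstart = 20)
  mean(silhouette(km$cluster, dist(X))[, "sil_width"])
})
avg.sil
# gap statistic: compares within-cluster dispersion to a null reference
clusGap(X, FUNcluster = kmeans, K.max = 6, B = 50, nstart = 20)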
What is your favorite statistical quote? This is community wiki, so please one quote per answer.
[ "https://stats.stackexchange.com/questions/726", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/223/" ]
8
HuggingFaceH4/stack-exchange-preferences
All models are wrong, but some are useful. (George E. P. Box) Reference: Box & Draper (1987), Empirical model-building and response surfaces, Wiley, p. 424. Also: G.E.P. Box (1979), "Robustness in the Strategy of Scientific Model Building" in Robustness in Statistics (Launer & Wilkinson eds.), p. 202.
I was having a look round a few things yesterday and came across Bayesian search theory. Thinking about this theory led me to think about a problem I was working on a few years ago regarding geological interpretation. We were looking at the geology at one specific site and it was essentially made up from two different types of rock. Boreholes had been drilled at different locations and showed differing amounts of the two different types of rock at different levels in the ground, along with different amounts of weathering of the rock. A number of geologists looked at the available data and all came up with different interpretations. It seems to me that Bayesian search theory could have been used in this case, particularly where extra data was gathered over time, to give some indication of how likely the different interpretations were. Has anyone encountered a case where Bayesian search theory has been used in this way? Is there a standard framework for doing this? I would have thought this may be something that the oil industry has a lot of research on, because it would be applicable to the search for oil.
[ "https://stats.stackexchange.com/questions/743", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210/" ]
4
HuggingFaceH4/stack-exchange-preferences
Though it is not generally labeled as Bayesian search theory, such methods are pretty widely used in oil exploration. There are, however, important differences in the standard examples that drive different features of their respective modeling problems. In the case of lost vessel exploration (in Bayesian search theory), we are looking for a specific point on the sea floor (one elevation), with a distribution modeling the likelihood of its resting location, and another distribution modeling the likelihood of finding the boat were it at that depth. These distributions then guide the search, and are continuously updated through the results of the guided search. Though similar, oil exploration is fraught with complicating features (multiple sampling depths, high sampling costs, variable yields, multiple geological indicators, drilling cost, etc.) that necessitate methods that go beyond what is considered in the prior example. See Learning through Oil and Gas Exploration for an overview of these complicating factors and a way to model them. So, yes, it may be said that the oil exploration problem is different in magnitude, but not in kind, from lost vessel exploration, and thus similar methods may be fruitfully applied. Finally, a quick literature search reveals many different modeling approaches, which is not too surprising, given the complicated nature of the problem.
I have commonly heard that LME models are more sound in the analysis of accuracy data (i.e., in psychology experiments), in that they can work with binomial and other non-normal distributions that traditional approaches (e.g., ANOVA) can't. What is the mathematical basis of LME models that allows them to incorporate these other distributions, and what are some not-overly-technical papers describing this?
[ "https://stats.stackexchange.com/questions/764", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/445/" ]
5
HuggingFaceH4/stack-exchange-preferences
One major benefit of mixed-effects models is that they don't assume independence amongst observations, and there can be correlated observations within a unit or cluster. This is covered concisely in "Modern Applied Statistics with S" (MASS) in the first section of chapter 10 on "Random and Mixed Effects". V&R walk through an example with gasoline data comparing ANOVA and lme in that section, so it's a good overview. The R function to use is lme in the nlme package. The model formulation is based on Laird and Ware (1982), so you can refer to that as a primary source, although it's certainly not good for an introduction. Laird, N.M. and Ware, J.H. (1982) "Random-Effects Models for Longitudinal Data", Biometrics, 38, 963–974. Venables, W.N. and Ripley, B.D. (2002) "Modern Applied Statistics with S", 4th Edition, Springer-Verlag. You can also have a look at the "Linear Mixed Models" (PDF) appendix to John Fox's "An R and S-PLUS Companion to Applied Regression". And this lecture by Roger Levy (PDF) discusses mixed effects models w.r.t. a multivariate normal distribution.
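A hedged sketch of the model formulation in R; the data frame dat and the variable names (accuracy, condition, subject) are invented for illustration, and for genuinely binomial accuracy data one would switch to a generalized mixed model such as lme4::glmer:
library(nlme)
# one row per trial: accuracy = response, condition = fixed effect, subject = grouping factor
fit <- lme(accuracy ~ condition, random = ~ 1 | subject, data = dat)
summary(fit)
# binomial analogue with lme4 (a different package, shown only for comparison):
# library(lme4)
# fit2 <- glmer(correct ~ condition + (1 | subject), family = binomial, data = dat)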
What is the difference between operations research and statistical analysis?
[ "https://stats.stackexchange.com/questions/775", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/460/" ]
5
HuggingFaceH4/stack-exchange-preferences
Those are entire academic disciplines, so I do not think you can expect much more here than pointers to further, and more extensive, documentation, e.g. Wikipedia on Operations Research and Statistics. Let me try a personal definition which may be grossly simplifying: Operations research is concerned with process modeling and optimisation. Statistical modeling is concerned with describing the so-called 'data generating process': find a model that describes something observed, and then do estimation, inference and possibly prediction.
I'm interested in finding as optimal a method as I can for determining how many bins I should use in a histogram. My data should range from 30 to 350 objects at most, and in particular I'm trying to apply thresholding (like Otsu's method) where "good" objects, which I should have fewer of and which should be more spread out, are separated from "bad" objects, which should be more dense in value. A concrete value would have a score of 1-10 for each object. I'd have 5-10 objects with scores 6-10, and 20-25 objects with scores 1-4. I'd like to find a histogram binning pattern that generally allows something like Otsu's method to threshold off the low-scoring objects. However, in the implementation of Otsu's I've seen, the bin size was 256, and often I have many fewer data points than 256, which to me suggests that 256 is not a good bin number. With so few data, what approaches should I take to calculating the number of bins to use?
[ "https://stats.stackexchange.com/questions/798", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/476/" ]
8
HuggingFaceH4/stack-exchange-preferences
The Freedman-Diaconis rule is very robust and works well in practice. The bin width is set to $h=2\times\text{IQR}\times n^{-1/3}$, so the number of bins is $(\max-\min)/h$, where $n$ is the number of observations, max is the maximum value and min is the minimum value. In base R, you can use:
hist(x, breaks = "FD")
For other plotting libraries without this option (e.g., ggplot2), you can calculate the bin width as:
bw <- 2 * IQR(x) / length(x)^(1/3)
# for example
ggplot() + geom_histogram(aes(x), binwidth = bw)
There are a lot of references in the statistics literature to "functional data" (i.e. data that are curves), and in parallel, to "high dimensional data" (i.e. when data are high dimensional vectors). My question is about the difference between the two types of data. Applied statistical methodologies that apply in case 1 can be understood as a rephrasing of methodologies from case 2, through a projection into a finite dimensional subspace of a space of functions (it can be polynomials, splines, wavelets, Fourier, ...), which translates the functional problem into a finite dimensional vector problem (since in applied mathematics everything comes to be finite at some point). My question is: can we say that any statistical procedure that applies to functional data can also be applied (almost directly) to high dimensional data, and that any procedure dedicated to high dimensional data can be (almost directly) applied to functional data? If the answer is no, can you illustrate? EDIT/UPDATE with the help of Simon Byrne's answer: sparsity (S-sparse assumption, $l^p$ ball and weak $l^p$ ball for $p<1$) is used as a structural assumption in high dimensional statistical analysis. "Smoothness" is used as a structural assumption in functional data analysis. On the other hand, the inverse Fourier transform and inverse wavelet transform turn sparsity into smoothness, and smoothness is turned into sparsity by the wavelet and Fourier transforms. Does this make the critical difference mentioned by Simon not so critical?
[ "https://stats.stackexchange.com/questions/812", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/223/" ]
5
HuggingFaceH4/stack-exchange-preferences
Functional data often involve different questions. I've been reading Functional Data Analysis by Ramsay and Silverman, and they spend a lot of time discussing curve registration, warping functions, and estimating derivatives of curves. These tend to be very different questions from those asked by people interested in studying high-dimensional data.
I have R scripts for reading large amounts of CSV data from different files and then performing machine learning tasks such as SVM for classification. Are there any libraries for making use of multiple cores on the server for R? Or what is the most suitable way to achieve that?
[ "https://stats.stackexchange.com/questions/825", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/480/" ]
5
HuggingFaceH4/stack-exchange-preferences
If it's on Linux, then the most straightforward option is multicore. Beyond that, I suggest having a look at MPI (especially with the snow package). More generally, have a look at: The High-Performance Computing task view on CRAN. "State of the Art in Parallel Computing with R". Lastly, I recommend using the foreach package to abstract away the parallel backend in your code. That will make it more useful in the long run.
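A hedged sketch of the foreach abstraction; doParallel is my choice of backend here (not named in the answer), and the CSV directory, label column, and use of e1071::svm are all hypothetical stand-ins for the poster's workflow:
library(foreach)
library(doParallel)

cl <- makeCluster(4)                      # e.g. detectCores() - 1
registerDoParallel(cl)

files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
results <- foreach(f = files, .combine = rbind, .packages = "e1071") %dopar% {
  d <- read.csv(f)
  m <- svm(label ~ ., data = d)           # illustrative per-file model fit
  data.frame(file = f, n = nrow(d), train.acc = mean(fitted(m) == d$label))
}

stopCluster(cl)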
I have $N$ paired observations ($X_i$, $Y_i$) drawn from a common unknown distribution, which has finite first and second moments, and is symmetric around the mean. Let $\sigma_X$ be the standard deviation of $X$ (unconditional on $Y$), and $\sigma_Y$ the same for $Y$. I would like to test the hypotheses $H_0$: $\sigma_X = \sigma_Y$ against $H_1$: $\sigma_X \neq \sigma_Y$. Does anyone know of such a test? I can assume in a first analysis that the distribution is normal, although the general case is more interesting. I am looking for a closed-form solution. Bootstrap is always a last resort.
[ "https://stats.stackexchange.com/questions/841", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/30/" ]
4
HuggingFaceH4/stack-exchange-preferences
You could use the fact that $(n-1)s^2/\sigma^2$, where $s^2$ is a sample variance based on $n$ observations, follows a chi-squared distribution with $n-1$ degrees of freedom. Under your null hypothesis, the difference of the two sample variances would then be the difference of two scaled chi-squared random variates with the same underlying true variance. I do not know whether the difference of two chi-squared random variates has a tractable distribution, but the above may help you to some extent.
Does anyone know of a variation of Fisher's Exact Test which takes weights into account? For instance sampling weights. So instead of the usual 2x2 cross table, every data point has a "mass" or "size" value weighing the point. Example data:
A B weight
N N 1
N N 3
Y N 1
Y N 2
N Y 6
N Y 7
Y Y 1
Y Y 2
Y Y 3
Y Y 4
Fisher's Exact Test then uses this 2x2 cross table:
A\B    N    Y  All
N      2    2    4
Y      2    4    6
All    4    6   10
If we would take the weight as an 'actual' number of data points, this would result in:
A\B    N    Y  All
N      4   13   17
Y      3   10   13
All    7   23   30
But that would result in much too high a confidence. One data point changing from N/Y to N/N would make a very large difference in the statistic. Plus, it wouldn't work if any weight contained fractions.
[ "https://stats.stackexchange.com/questions/856", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/506/" ]
4
HuggingFaceH4/stack-exchange-preferences
I have a suspicion that 'exact' tests and sampling weights are essentially incompatible concepts. I checked in Stata, which has good facilities for sample surveys and reasonable ones for exact tests, and its 8 possible test statistics for a crosstab with sample weights don't include any 'exact' tests such as Fisher's. The relevant Stata manual entry (for svy: tabulate twoway) advises using its default test in all cases. This default method is based on the usual Pearson's chi-squared statistic. To quote: "To account for the survey design, the statistic is turned into an F statistic with noninteger degrees of freedom by using a second-order Rao and Scott (1981, 1984) correction". Refs: Rao, J. N. K., and A. J. Scott. 1981. The analysis of categorical data from complex sample surveys: Chi-squared tests for goodness of fit and independence in two-way tables. Journal of the American Statistical Association 76:221–230. Rao, J. N. K., and A. J. Scott. 1984. On chi-squared tests for multiway contingency tables with cell proportions estimated from survey data. Annals of Statistics 12: 46–60.
Say I want to estimate a large number of parameters, and I want to penalize some of them because I believe they should have little effect compared to the others. How do I decide what penalization scheme to use? When is ridge regression more appropriate? When should I use lasso?
[ "https://stats.stackexchange.com/questions/866", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/455/" ]
8
HuggingFaceH4/stack-exchange-preferences
Keep in mind that ridge regression can't zero out coefficients; thus, you either end up including all the coefficients in the model, or none of them. In contrast, the LASSO does both parameter shrinkage and variable selection automatically. If some of your covariates are highly correlated, you may want to look at the Elastic Net [3] instead of the LASSO. I'd personally recommend using the Non-Negative Garotte (NNG) [1] as it's consistent in terms of estimation and variable selection [2]. Unlike LASSO and ridge regression, NNG requires an initial estimate that is then shrunk towards the origin. In the original paper, Breiman recommends the least-squares solution for the initial estimate (you may however want to start the search from a ridge regression solution and use something like GCV to select the penalty parameter). In terms of available software, I've implemented the original NNG in MATLAB (based on Breiman's original FORTRAN code). You can download it from: http://www.emakalic.org/blog/wp-content/uploads/2010/04/nngarotte.zip BTW, if you prefer a Bayesian solution, check out [4,5]. References: [1] Breiman, L. Better Subset Regression Using the Nonnegative Garrote Technometrics, 1995, 37, 373-384 [2] Yuan, M. & Lin, Y. On the non-negative garrotte estimator Journal of the Royal Statistical Society (Series B), 2007, 69, 143-161 [3] Zou, H. & Hastie, T. Regularization and variable selection via the elastic net Journal of the Royal Statistical Society (Series B), 2005, 67, 301-320 [4] Park, T. & Casella, G. The Bayesian Lasso Journal of the American Statistical Association, 2008, 103, 681-686 [5] Kyung, M.; Gill, J.; Ghosh, M. & Casella, G. Penalized Regression, Standard Errors, and Bayesian Lassos Bayesian Analysis, 2010, 5, 369-412
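For the original ridge-versus-LASSO part of the question, a hedged sketch with the glmnet package (not mentioned in the answer, but a standard implementation); the simulated x and y are placeholders:
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), ncol = 20)        # 20 candidate predictors
y <- x[, 1] - 2 * x[, 2] + rnorm(100)          # only two truly matter

ridge <- cv.glmnet(x, y, alpha = 0)            # alpha = 0: ridge (shrinks, never zeroes)
lasso <- cv.glmnet(x, y, alpha = 1)            # alpha = 1: LASSO (shrinks and selects)
# 0 < alpha < 1 gives the elastic net mentioned above

coef(ridge, s = "lambda.min")
coef(lasso, s = "lambda.min")                  # note the exact zeros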
I created a quick fun Excel Spreadsheet tonight to try and predict which video games I'll enjoy if I buy them. I'm wondering if this quick example makes sense from a Logistic Regression perspective and if I am computing all of the values correctly. Unfortunately, if I did everything correctly I doubt I have much to look forward to on my XBOX or PS3 ;) I laid out a few categories and weighted them like so (the real spreadsheet lists twice as many or so):
Weight:    4                  4             3         1
Category:  Visually Stunning  Exhilarating  Artistic  Sporty
Then I went through some games I have and rated them in each category (ratings of 0-4). I then set a separate cell to be the value of Beta_0 and tuned that until the resulting percentages all looked about right. Next I entered in my expected ratings for the new games I was looking forward to and got percentages for those. Example, with Beta_0 := -35 and ratings of 4, 4, 0, 1 on the four categories above, the probability would be calculated as:
P = 1 / [1 + e^(-35 + (4*4 + 4*4 + 3*0 + 1*1))]
P = 88.1%
If I were to automate the regression, am I correct in thinking I'd be tuning Beta_0 to make it so the positive training examples come out high and the negative training examples come out low? I'm completely new to this (just started today thanks to this site actually!) so please have no concern about bruising my ego, I'm eager to learn more. Thanks!
[ "https://stats.stackexchange.com/questions/868", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9426/" ]
4
HuggingFaceH4/stack-exchange-preferences
Like drknexus said, for a logistic regression, your outcome measure needs to be 0 and 1. I'd go back and recode your outcome as 0 (didn't like it), or 1 (did like it). Then, abandon excel and load the data into R (it's really not as intimidating as it looks). Your regression will look something like this: glm(Liked ~ Visually.Stunning + Exhilarating + Artistic + Sporty, family = binomial, data = data) The regression will return betas for each feature in terms of log-odds. So, for every 1 point increase in Artistic, for instance, you'll have a value for how much that increases or decreases the log-odds of your enjoyment. Most of the betas will be positive, unless you dislike sporty games or something. Now, you'll have to ask yourself some interesting questions. The assumption of the model is that the values on each of these scores affect your enjoyment independently, which probably isn't true! A game that is very Visually.Stunning and Exhilarating is probably way better than you would expect given those component parts. And it's probably the case that if a game gets scores of 1 on all features except Sporty, which gets a 4, that high Sporty score is worth less than if the other scores were higher. That is, many or all of your features probably interact. To fit an accurate model, then, you'll want to add in these interactions. That formula would look like this: glm(Liked ~ Visually.Stunning * Exhilarating * Artistic * Sporty, family = binomial, data = data) Now, there are two points of difficulty here. First, you need to have more data to fit a good model with this many interactions than the pure independence model. Second, you risk overfitting, which means that the model will very accurately describe the original data, but will be less good at making accurate predictions for future data. Needless to say, some people spend all day fitting and refitting models like this one.
I realize this is pedantic and trite, but as a researcher in a field outside of statistics, with limited formal education in statistics, I always wonder if I'm writing "p-value" correctly. Specifically: Is the "p" supposed to be capitalized? Is the "p" supposed to be italicized? (Or in mathematical font, in TeX?) Is there supposed to be a hyphen between "p" and "value"? Alternatively, is there no "proper" way of writing "p-value" at all, and any dolt will understand what I mean if I just place "p" next to "value" in some permutation of these options?
[ "https://stats.stackexchange.com/questions/871", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/520/" ]
6
HuggingFaceH4/stack-exchange-preferences
There do not appear to be "standards". For example: The Nature style guide refers to "P value" This APA style guide refers to "p value" The Blood style guide says: Capitalize and italicize the P that introduces a P value Italicize the p that represents the Spearman rank correlation test Wikipedia uses "p-value" (with hyphen and italicized "p") My brief, unscientific survey suggests that the most common combination is lower-case, italicized p without a hyphen.
The 'fundamental' idea of statistics for estimating parameters is maximum likelihood. I am wondering what is the corresponding idea in machine learning. Qn 1. Would it be fair to say that the 'fundamental' idea in machine learning for estimating parameters is: 'Loss Functions' [Note: It is my impression that machine learning algorithms often optimize a loss function and hence the above question.] Qn 2: Is there any literature that attempts to bridge the gap between statistics and machine learning? [Note: Perhaps, by way of relating loss functions to maximum likelihood. (e.g., OLS is equivalent to maximum likelihood for normally distributed errors etc)]
[ "https://stats.stackexchange.com/questions/886", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
5
HuggingFaceH4/stack-exchange-preferences
If statistics is all about maximizing likelihood, then machine learning is all about minimizing loss. Since you don't know the loss you will incur on future data, you minimize an approximation, i.e. the empirical loss. For instance, if you have a prediction task and are evaluated by the number of misclassifications, you could train parameters so that the resulting model produces the smallest number of misclassifications on the training data. "Number of misclassifications" (i.e., 0-1 loss) is a hard loss function to work with because it's not differentiable, so you approximate it with a smooth "surrogate". For instance, log loss is an upper bound on 0-1 loss, so you could minimize that instead, and this turns out to be the same as maximizing the conditional likelihood of the data. With a parametric model this approach becomes equivalent to logistic regression. In a structured modeling task with a log-loss approximation of 0-1 loss, you get something different from maximum conditional likelihood; you will instead maximize the product of (conditional) marginal likelihoods. To get a better approximation of loss, people noticed that training the model to minimize loss and using that same loss as an estimate of future loss gives an overly optimistic estimate. So for more accurate (true future loss) minimization they add a bias-correction term to the empirical loss and minimize that; this is known as structural risk minimization. In practice, figuring out the right bias-correction term may be too hard, so you add an expression "in the spirit" of the bias-correction term, for instance the sum of squares of the parameters. In the end, almost all parametric machine learning supervised classification approaches end up training the model to minimize the following: $\sum_{i} L(\textrm{m}(x_i,w),y_i) + P(w)$, where $\textrm{m}$ is your model parametrized by vector $w$, $i$ is taken over all datapoints $\{x_i,y_i\}$, $L$ is some computationally nice approximation of your true loss and $P(w)$ is some bias-correction/regularization term. For instance, if your $x \in \{-1,1\}^d$ and $y \in \{-1,1\}$, a typical approach would be to let $\textrm{m}(x)=\textrm{sign}(w \cdot x)$, $L(\textrm{m}(x),y)=\log(1+\exp(-y\,(x \cdot w)))$ (the logistic surrogate for the 0-1 loss), $P(w)=q \times (w \cdot w)$, and choose $q$ by cross validation.
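A hedged R sketch of this "minimize surrogate loss plus regularizer" recipe with the logistic surrogate and a squared-norm penalty; the simulated data, the value of q, and the use of optim are all illustrative choices, not part of the answer:
set.seed(1)
n <- 200; d <- 5
X <- matrix(sample(c(-1, 1), n * d, replace = TRUE), n, d)
w.true <- c(2, -1, 0, 0, 1)
y <- ifelse(X %*% w.true + rnorm(n) > 0, 1, -1)

q <- 0.1                                       # regularization strength
objective <- function(w) {
  margins <- y * (X %*% w)
  sum(log1p(exp(-margins))) + q * sum(w^2)     # empirical log loss + penalty P(w)
}
w.hat <- optim(rep(0, d), objective, method = "BFGS")$par
mean(sign(X %*% w.hat) == y)                   # training accuracy of m(x) = sign(w . x)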
What is the difference between offline and online learning? Is it just a matter of learning over the entire dataset (offline) vs. learning incrementally (one instance at a time)? What are examples of algorithms used in both?
[ "https://stats.stackexchange.com/questions/897", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/284/" ]
7
HuggingFaceH4/stack-exchange-preferences
Online learning means that you are doing it as the data comes in. Offline means that you have a static dataset. So, for online learning, you (typically) have more data, but you have time constraints. Another wrinkle that can affect online learning is that your concepts might change through time. Let's say you want to build a classifier to recognize spam. You can acquire a large corpus of e-mail, label it, and train a classifier on it. This would be offline learning. Or, you can take all the e-mail coming into your system, and continuously update your classifier (labels may be a bit tricky). This would be online learning.
Comparing two variables, I came up with the following chart. The x, y pairs represent independent observations of data on the field. I've done a Pearson correlation on it and have found one of 0.6. My end goal is to establish a relationship between y and x such that y = f(x). What analysis would you recommend to obtain some form of a relationship between the two variables? Graph http://koopics.com/ask_math_chart.jpg
[ "https://stats.stackexchange.com/questions/913", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/59/" ]
4
HuggingFaceH4/stack-exchange-preferences
Normality seems to be strongly violated, at least by your y variable. I would log-transform y to see if that cleans things up a bit. Then, fit a regression of log(y) ~ x. The regression will return a formula of the form $\log(y) = \alpha + \beta x$, which you can transform back to the original scale as $y = \exp(\alpha + \beta x)$.
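A hedged sketch of that fit-and-back-transform in R; the data frame df with columns x and y is hypothetical, and the log transform assumes y is strictly positive:
fit <- lm(log(y) ~ x, data = df)
summary(fit)                          # intercept = alpha, slope = beta on the log scale
newdata <- data.frame(x = seq(min(df$x), max(df$x), length.out = 100))
y.hat <- exp(predict(fit, newdata))   # naive back-transform (ignores retransformation bias)
plot(df$x, df$y)
lines(newdata$x, y.hat)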
For 1,000,000 observations, I observed a discrete event, X, 3 times for the control group and 10 times for the test group. How do I determine, for a large number of observations (1,000,000), whether three is statistically different from ten?
[ "https://stats.stackexchange.com/questions/924", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/559/" ]
4
HuggingFaceH4/stack-exchange-preferences
I think a simple chi-squared test will do the trick. Do you have 1,000,000 observations for both control and test? If so, your table of observations will be (in R code) Edit: Woops! Left off a zero! m <- rbind(c(3, 1000000-3), c(10, 1000000-10)) # [,1] [,2] # [1,] 3 999997 # [2,] 10 999990 And chi-squared test will be chisq.test(m) Which returns chi-squared = 2.7692, df = 1, p-value = 0.0961, which is not statistically significant at the p < 0.05 level. I'd be surprised if these could be clinically significant anyway.
What are some podcasts related to statistical analysis? I've found some audio recordings of college lectures on ITunes U, but I'm not aware of any statistical podcasts. The closest thing I'm aware of is an operations research podcast The Science of Better. It touches on statistical issues, but it's not specifically a statistical show.
[ "https://stats.stackexchange.com/questions/927", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/319/" ]
5
HuggingFaceH4/stack-exchange-preferences
BBC's More or Less is often concerned with numeracy and statistical literacy issues. But it's not specifically about statistics. Their About page has some background. More or Less is devoted to the powerful, sometimes beautiful, often abused but ever ubiquitous world of numbers. The programme was an idea born of the sense that numbers were the principal language of public argument. [...]
This one has been bothering me for a while, and a great dispute was held around it. In psychology (as well as in other social sciences), we deal with different ways of dealing with numbers :-) i.e. the levels of measurement. It's also common practice in psychology to standardize some questionnaire, hence transform the data into percentile scores (in order to assess a respondent's position within the representative sample). Long story short, if you have a variable that holds the data expressed in percentile scores, how should you treat it? As an ordinal, interval, or even ratio variable?! It's not ratio, because there is no real 0 (the 0th percentile doesn't imply absence of the measured property, but the variable's smallest value). I advocate the view that percentile scores are ordinal, since P70 - P50 is not equal to P50 - P30, while the other side says it's interval. Please gentlemen, cut the cord. Ordinal or interval?
[ "https://stats.stackexchange.com/questions/928", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1356/" ]
4
HuggingFaceH4/stack-exchange-preferences
Background to understand my answer: The critical property that distinguishes between the ordinal and interval scales is whether we can take ratios of differences. While you cannot take a ratio of direct measures for either scale, the ratio of differences is meaningful for interval but not ordinal (see: http://en.wikipedia.org/wiki/Level_of_measurement#Interval_scale). Temperature is the classic example of an interval scale. Consider the following: 80 F = 26.67 C, 40 F = 4.44 C, and 20 F = -6.67 C. The difference between the first and the second is 40 F, or 22.23 C; the difference between the second and the third is 20 F, or 11.11 C. Notice that the ratio of the two differences is the same irrespective of the scale on which we measure temperature. A classic example of ordinal data is ranks. If three teams, A, B, and C, are ranked 1st, 2nd, and 4th, respectively, then a statement like this does not make sense: "Team A's difference in strength vis-a-vis team B is half of team B's difference in strength relative to team C." Answer to your question: Is the ratio of differences in percentiles meaningful? In other words, is the ratio of differences in percentiles invariant to the underlying scale? Consider, for example, (P70-P50) / (P50-P30). Suppose that these percentiles are based on an underlying score between 0-100 and we compute the above ratio. Clearly, we would obtain the same ratio of percentile differences under an arbitrary linear transformation of the score (e.g., multiply all scores by 10 so that the range is between 0-1000 and compute the percentiles). Thus, my answer: Interval.
I've been beginning to work my way through Statistical Data Mining Tutorials by Andrew Moore (highly recommended for anyone else first venturing into this field). I started by reading this extremely interesting PDF entitled "Introductory overview of time-series-based anomaly detection algorithms" in which Moore traces through many of the techniques used in the creation of an algorithm to detect disease outbreaks. Halfway through the slides, on page 27, he lists a number of other "state of the art methods" used to detect outbreaks. The first one listed is wavelets. Wikipedia describes a wavelet as a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation", but the article does not describe their application to statistics, and my Google searches yield highly academic papers that assume a knowledge of how wavelets relate to statistics, or full books on the subject. I would like a basic understanding of how wavelets are applied to time-series anomaly detection, much in the way Moore illustrates the other techniques in his tutorial. Can someone provide an explanation of how detection methods using wavelets work or a link to an understandable article on the matter?
[ "https://stats.stackexchange.com/questions/942", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/75/" ]
5
HuggingFaceH4/stack-exchange-preferences
Wavelets are useful to detect singularities in a signal (see, for example, the paper here, in particular figure 3 for an illustration, and the references mentioned in that paper). I guess singularities can sometimes be an anomaly? The idea here is that the continuous wavelet transform (CWT) has maxima lines that propagate along frequencies, i.e. the longer the line is, the stronger the singularity. See Figure 3 in the paper to see what I mean! Note that there is free Matlab code related to that paper; it should be here. Additionally, I can give you some heuristics detailing why the DISCRETE wavelet transform (DWT) is interesting for a statistician (the preceding example is about the continuous one; excuse non-exhaustivity): There is a wide class of (realistic, Besov space) signals that are transformed into a sparse sequence by the wavelet transform (compression property). There is a wide class of (quasi-stationary) processes that are transformed into a sequence with almost uncorrelated features (decorrelation property). Wavelet coefficients contain information that is localized in time and in frequency, at different scales (multi-scale property). Wavelet coefficients of a signal concentrate on its singularities.
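A hedged, package-free R sketch of the basic DWT idea: compute finest-scale Haar detail coefficients and flag unusually large ones. The series, the injected spike, and the universal-threshold-style cutoff are all made up for illustration:
set.seed(1)
n <- 512
x <- sin(2 * pi * (1:n) / 64) + rnorm(n, sd = 0.1)
x[300] <- x[300] + 3                              # injected anomaly

# finest-scale Haar detail coefficients: scaled differences of adjacent pairs
d1 <- (x[seq(2, n, by = 2)] - x[seq(1, n, by = 2)]) / sqrt(2)

thr  <- sqrt(2 * log(length(d1))) * mad(d1)       # robust, universal-style threshold
flag <- which(abs(d1) > thr)
2 * flag                                          # approximate locations in the original series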
When I type a left paren or any quote in the R console, it automatically creates a matching one to the right of my cursor. I guess the idea is that I can just type the expression I want inside without having to worry about matching, but I find it annoying, and would rather just type it myself. How can I disable this feature? I am using R 2.8.0 on OSX 10.5.8.
[ "https://stats.stackexchange.com/questions/944", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
4
HuggingFaceH4/stack-exchange-preferences
On OSX, go to R > Preferences > Editor and deselect Match braces/quotes
Take $x \in \{0,1\}^d$ and $y \in \{0,1\}$ and suppose we model the task of predicting y given x using logistic regression. When can logistic regression coefficients be written in closed form? One example is when we use a saturated model. That is, define $P(y|x) \propto \exp(\sum_i w_i f_i(x_i))$, where $i$ indexes sets in the power-set of $\{x_1,\ldots,x_d\}$, and $f_i$ returns 1 if all variables in the $i$'th set are 1, and 0 otherwise. Then you can express each $w_i$ in this logistic regression model as a logarithm of a rational function of statistics of the data. Are there other interesting examples when closed form exists?
[ "https://stats.stackexchange.com/questions/949", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/511/" ]
5
HuggingFaceH4/stack-exchange-preferences
As kjetil b halvorsen pointed out, it is, in its own way, a miracle that linear regression admits an analytical solution. And this is so only by virtue of the linearity of the problem (with respect to the parameters). In OLS, you have $$ \sum_i (y_i - x_i \beta)^2 \to \min_\beta, $$ which has the first order conditions $$ -2 \sum_i (y_i - x_i\beta) x_i = 0 $$ For a problem with $p$ variables (including a constant, if needed; there are some regression-through-the-origin problems, too), this is a system with $p$ equations and $p$ unknowns. Most importantly, it is a linear system, so you can find a solution using the standard linear algebra theory and practice. This system will have a solution with probability 1 unless you have perfectly collinear variables. Now, with logistic regression, things aren't that easy anymore. Writing down the log-likelihood function, $$ l(y;x,\beta) = \sum_i y_i \ln p_i + (1-y_i) \ln(1-p_i), \quad p_i = (1+\exp(-\theta_i))^{-1}, \quad \theta_i = x_i \beta, $$ and taking its derivative to find the MLE, we get $$ \frac{\partial l}{\partial \beta'} = \sum_i \frac{{\rm d}p_i}{{\rm d}\theta}\Bigl( \frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \Bigr)x_i = \sum_i \Bigl[y_i-\frac1{1+\exp(-x_i\beta)}\Bigr]x_i $$ The parameters $\beta$ enter this in a very nonlinear way: for each $i$, there's a nonlinear function, and they are added together. There is no analytical solution (except probably in a trivial situation with two observations, or something like that), and you have to use nonlinear optimization methods to find the estimates $\hat\beta$. A somewhat deeper look into the problem (taking the second derivative) reveals that this is a convex optimization problem of finding a maximum of a concave function (a glorified multivariate parabola), so either one exists, and any reasonable algorithm should find it rather quickly, or things blow up to infinity. The latter does happen to logistic regression when ${\rm Prob}[Y_i=1|x_i\beta > c] = 1$ for some $c$, i.e., you have a perfect prediction. This is a rather unpleasant artifact: you would think that when you have a perfect prediction, the model works perfectly, but curiously enough, it is the other way round.
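A hedged toy illustration in R of the perfect-prediction pathology mentioned at the end; the six data points are constructed so that x separates the two classes completely:
x <- c(1, 2, 3, 4, 5, 6)
y <- c(0, 0, 0, 1, 1, 1)               # y is perfectly predicted by x > 3.5
fit <- glm(y ~ x, family = binomial)
summary(fit)
# glm typically warns that fitted probabilities numerically 0 or 1 occurred;
# the slope estimate is huge with an enormous standard error, because the
# likelihood keeps increasing as the coefficient grows without bound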
I have a dataset made up of elements from three groups, let's call them G1, G2, and G3. I analysed certain characteristics of these elements and divided them into 3 types of "behaviour" T1, T2, and T3 (I used cluster analysis to do that). So, now I have a 3 x 3 contingency table like this with the counts of elements in the three groups divided by type:
      |   T1   |   T2   |   T3   |
------+--------+--------+--------+
 G1   |   18   |   15   |   65   |
------+--------+--------+--------+
 G2   |   20   |   10   |   70   |
------+--------+--------+--------+
 G3   |   15   |   55   |   30   |
Now, I can run a Fisher test on these data in R:
data <- matrix(c(18, 20, 15, 15, 10, 55, 65, 70, 30), nrow=3)
fisher.test(data)
and I get:
Fisher's Exact Test for Count Data
data: data
p-value = 9.028e-13
alternative hypothesis: two.sided
So my questions are: Is it correct to use the Fisher test this way? How do I know who is different from whom? Is there a post-hoc test I can use? Looking at the data I would say the 3rd group has a different behaviour from the first two; how do I show that statistically? Someone pointed me to logit models: are they a viable option for this type of analysis? Any other option to analyse this type of data? Thank you a lot, nico
[ "https://stats.stackexchange.com/questions/961", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/582/" ]
5
HuggingFaceH4/stack-exchange-preferences
First, I think that the Fisher test is used correctly. Count data are better handled using log-linear models (not logit, to ensure that the fitted values are bounded below). In R you can specify family=poisson (which sets errors = Poisson and link = log). The log link ensures that all the fitted values are positive, while the Poisson errors take account of the fact that the data are integers and have variances that are equal to their means, e.g. glm(y ~ x, poisson), and the model is fitted with a log link and Poisson errors (to account for the non-normality). In cases where there is overdispersion (the residual deviance should be roughly equal to the residual degrees of freedom if the Poisson errors assumption is appropriate), instead of using quasipoisson as the error family you could fit a negative binomial model (this involves the function glm.nb from package MASS). In your case you could fit and compare models using commands like the following:
observed <- as.vector(data)
Ts <- factor(rep(c("T1", "T2", "T3"), each = 3))
Gs <- factor(rep(c("G1", "G2", "G3"), 3))
# the full model, and a model without the interaction terms
model1 <- glm(observed ~ Ts * Gs, poisson)
model2 <- glm(observed ~ Ts + Gs, poisson)
# you can compare the two models using anova with a chi-squared test
anova(model1, model2, test = "Chi")
summary(model1)
Always make sure that your minimal model contains all the nuisance variables. As for how we know who is different from whom, there are some plots that may help you. The R function assocplot produces an association plot indicating deviations from independence of rows and columns in a two-dimensional contingency table. Here are the same data plotted as a mosaic plot:
mosaicplot(data, shade = TRUE)
When we are monitoring movements of structures we normally install monitoring points onto the structure before we do any work which might cause movement. This gives us a chance to take a few readings before we start doing the work, to 'baseline' the readings. Quite often the data is quite variable (the variations in the readings can easily be between 10 and 20% of the final movement). The measurements are also often affected by the environment in which they are taken, so one set of measurements taken on one project may not have the same accuracy as measurements on another project. Is there any statistical method, or rule of thumb, that can be applied to say how many baseline readings need to be taken to give a certain accuracy before the first reading is taken? Are there any rules of thumb that can be applied to this situation?
[ "https://stats.stackexchange.com/questions/977", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210/" ]
4
HuggingFaceH4/stack-exchange-preferences
I think you should look at power calculations. These are often used to decide the sample size of a survey or clinical trial. Taken from Wikipedia: A priori power analysis is conducted prior to the research study, and is typically used to determine an appropriate sample size to achieve adequate power.
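A hedged illustration of an a priori power calculation in base R; the effect size, standard deviation, power target, and one-sample setting below are placeholders rather than recommendations for the monitoring problem:
# how many baseline readings to detect a 2 mm shift when readings have sd = 3 mm,
# at the 5% significance level with 80% power
power.t.test(delta = 2, sd = 3, sig.level = 0.05, power = 0.80, type = "one.sample")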
I haven't studied statistics for over 10 years (and then just a basic course), so maybe my question is a bit hard to understand. Anyway, what I want to do is reduce the number of data points in a series. The x-axis is the number of milliseconds since the start of measurement and the y-axis is the reading for that point. Often there are thousands of data points, but I might only need a few hundred. So my question is: How do I accurately reduce the number of data points? What is the process called? (So I can google it) Are there any preferred algorithms (I will implement it in C#)? Hope you got some clues. Sorry for my lack of proper terminology. Edit: More details here: The raw data I got is heart rate data, in the form of the number of milliseconds since the last beat. Before plotting the data I calculate the number of milliseconds from the first sample, and the bpm (beats per minute) at each data point (60000/timesincelastbeat). I want to visualize the data, i.e. plot it in a line graph. I want to reduce the number of points in the graph from thousands to some hundreds. One option would be to calculate the average bpm for every second in the series, or maybe every 5 seconds or so. That would have been quite easy if I knew I would have at least one sample for each of those periods (seconds or 5-second intervals).
[ "https://stats.stackexchange.com/questions/980", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
4
HuggingFaceH4/stack-exchange-preferences
You have two problems: too many points and how to smooth over the remaining points.
Thinning your sample
If you have too many observations arriving in real time, you could always use simple random sampling to thin your sample. Note that for this to work well, the number of points would have to be very large. Suppose you have N points and you only want n of them. Then generate n random numbers from a discrete uniform U(0, N-1) distribution. These would be the points you use. If you want to do this sequentially, i.e. at each point you decide to use it or not, then just accept a point with probability p. So if you set p=0.01 you would accept (on average) 1 point in a hundred. If your data is unevenly spread and you only want to thin dense regions of points, then just make your thinning function a bit more sophisticated. For example, instead of p, what about: $$1-p \exp(-\lambda t)$$ where $\lambda$ is a positive number and $t$ is the time since the last observation. If the time between two points is large, i.e. large $t$, the probability of accepting a point will be one. Conversely, if two points are close together, the probability of accepting a point will be $1-p$. You will need to experiment with values of $\lambda$ and $p$.
Smoothing
Possibly something like a simple moving average type scheme. Or you could go for something more advanced like a kernel smoother (as others suggested). You will need to be careful that you don't smooth too much, since I assume that a sudden drop should be picked up very quickly in your scenario. There should be C# libraries available for this sort of stuff.
Conclusion
Thin if necessary, then smooth.
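A hedged R sketch of the density-dependent thinning rule above, followed by a light smooth; the simulated beat times and the values of p and lambda are placeholders chosen only to show the mechanics (in C# the same logic is a simple loop over the points):
set.seed(1)
t.ms <- cumsum(runif(5000, 400, 1200))       # stand-in for beat times in milliseconds
bpm  <- 60000 / diff(c(0, t.ms))

p <- 0.9; lambda <- 1 / 2000                 # tune to taste
gap  <- c(Inf, diff(t.ms))                   # time since the previous observation
keep <- runif(length(t.ms)) < 1 - p * exp(-lambda * gap)

plot(t.ms[keep], bpm[keep], type = "l")
lines(lowess(t.ms[keep], bpm[keep], f = 0.05), col = 2)   # optional extra smoothing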
I have distributions from two different data sets and I would like to measure how similar their distributions (in terms of their bin frequencies) are. In other words, I am not interested in the correlation of data point sequences but rather in their distributional properties with respect to similarity. Currently I can only observe a similarity by eye-balling, which is not enough. I don't want to assume causality and I don't want to predict at this point. So, I assume that correlation is the way to go. Spearman's correlation coefficient is used to compare non-normal data, and since I don't know anything about the real underlying distribution in my data, I think it would be a safe bet. I wonder if this measure can also be used to compare distributional data rather than the data points that are summarized in a distribution. Here is the example code in R that exemplifies what I would like to check:
aNorm <- rnorm(1000000)
bNorm <- rnorm(1000000)
cUni <- runif(1000000)
ha <- hist(aNorm)
hb <- hist(bNorm)
hc <- hist(cUni)
print(ha$counts)
print(hb$counts)
print(hc$counts)
# relatively similar
n <- min(c(NROW(ha$counts), NROW(hb$counts)))
cor.test(ha$counts[1:n], hb$counts[1:n], method="spearman")
# quite different
n <- min(c(NROW(ha$counts), NROW(hc$counts)))
cor.test(ha$counts[1:n], hc$counts[1:n], method="spearman")
Does this make sense or am I violating some assumptions of the coefficient? Thanks, R.
[ "https://stats.stackexchange.com/questions/1001", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/608/" ]
4
HuggingFaceH4/stack-exchange-preferences
For measuring the bin frequencies of two distributions, a pretty good test is the Chi Square test. It is exactly what it is designed for. And, it is even nonparametric. The distributions don't even have to be normal or symmetric. It is much better than the Kolmogorov-Smirnov test, which is known to be weak in fitting the tails of the distribution, where the fitting or diagnosing is often the most important. Spearman's correlation won't be so precise in terms of capturing the similarities of your actual bin frequencies. It will just tell you that the overall rankings of observations for the two distributions are similar. Instead, when calculating the Chi Square test (long hand, so to speak) you will be able to observe readily which bin frequency differentials are most responsible for driving down the overall p value of the Chi Square test. Another pretty good test is the Anderson-Darling test. It is one of the best tests to diagnose the fit between two distributions. However, in terms of giving information about the specific bin frequencies, I suspect that the Chi Square test gives you more information.
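A hedged R sketch of such a binned chi-squared comparison, continuing the question's example; the key detail (my assumption about the intent) is that both samples must be binned on a common set of breaks:
set.seed(1)
a <- rnorm(10000)
b <- rnorm(10000)
breaks <- seq(min(a, b), max(a, b), length.out = 11)   # common bins for both samples
ca <- hist(a, breaks = breaks, plot = FALSE)$counts
cb <- hist(b, breaks = breaks, plot = FALSE)$counts

tab <- rbind(ca, cb)            # 2 x k table of bin frequencies
res <- chisq.test(tab)          # test of homogeneity of the two binned distributions
res
(tab - res$expected)^2 / res$expected   # which bins drive the statistic
# very sparse tail bins may trigger a warning; merge them if so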
A colleague wants to compare models that use either a Gaussian distribution or a uniform distribution, and for other reasons needs the standard deviation of these two distributions to be equal. In R I can do a simulation...
sd(runif(100000000))
sd(runif(100000000, min=0, max=2))
and see that the calculated standard deviation is likely to be ~.2887 * the range of the uniform distribution. However, I was wondering if there was an equation that could yield the exact value, and if so, what that formula was.
[ "https://stats.stackexchange.com/questions/1012", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ]
5
HuggingFaceH4/stack-exchange-preferences
In general, the standard deviation of a continuous uniform distribution is (max - min) / sqrt(12).
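For completeness, a short derivation of that constant for a uniform distribution on $(a, b)$, which also explains the ~0.2887 seen in the simulation:
$E[X] = \frac{a+b}{2}, \qquad E[X^2] = \frac{1}{b-a}\int_a^b x^2\,dx = \frac{a^2+ab+b^2}{3}$
$\operatorname{Var}(X) = E[X^2] - (E[X])^2 = \frac{(b-a)^2}{12}, \qquad \operatorname{sd}(X) = \frac{b-a}{\sqrt{12}} \approx 0.2887\,(b-a)$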
I am trying to calculate the reliability in an elicitation exercise by analysing some test-retest questions given to the experts. The experts elicited a series of probability distributions which were then compared with the true value (found at a later date) by computing the standardized quadratic scores. These scores are the values that I am using to calculate the reliability between the test-retest results. Which reliability method would be appropriate here? I was looking mostly at Pearson's correlation and Cronbach's alpha (and got some negative values using both methods) but I am not sure this is the right approach. UPDATE: Background information. The data were collected from a number of students who were asked to predict their own actual exam mark in four chosen modules by giving a probability distribution of the marks. One module was then repeated at a later date (hence the test-retest exercise). Once the exam was taken, and the real results were available, the standardized quadratic scores were computed. These scores are proper scoring rules used to compare assessed probability distributions with the observed data which might be known at a later stage. The probability score Q is defined as: Quadratic score http://img717.imageshack.us/img717/9424/chart2j.png where k is the total number of elicited probabilities and j is the true outcome. My question is which reliability method would be more appropriate when it comes to assessing the reliability between the scores of the repeated modules? I calculated Pearson's correlation and Cronbach's alpha (and got some negative values using both methods) but there might be a better approach.
[ "https://stats.stackexchange.com/questions/1015", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/108/" ]
4
HuggingFaceH4/stack-exchange-preferences
Maybe I misunderstood the question, but what you are describing sounds like a test-retest reliability study on your Q scores. You have a series of experts, each assessing a number of items or questions on two occasions (presumably fixed in time). So, basically, you can assess the temporal stability of the judgments by computing an intraclass correlation coefficient (ICC), which will give you an idea of the variance attributable to subjects in the variability of observed scores (or, in other words, of the closeness of the observations on the same subject relative to the closeness of observations on different subjects). The ICC may easily be obtained from a mixed-effect model describing the measurement $y_{ij}$ of subject $i$ on occasion $j$ as $$ y_{ij}=\mu+u_i+\varepsilon_{ij},\quad \varepsilon\sim\mathcal{N}(0,\sigma^2) $$ where $u_i$ is the difference between the overall mean and subject $i$'s mean measurement, and $\varepsilon_{ij}$ is the measurement error for subject $i$ on occasion $j$. Here, this is a random-effect model. Unlike a standard ANOVA with subjects as a factor, we consider the $u_i$ as random (i.i.d.) effects, $u_i\sim\mathcal{N}(0,\tau^2)$, independent of the error terms. Each measurement differs from the overall mean $\mu$ by the sum of the two error terms, of which $u_i$ is shared across occasions on the same subject. The total variance is then $\tau^2+\sigma^2$ and the proportion of the total variance that is accounted for by the subjects is $$ \rho=\frac{\tau^2}{\tau^2+\sigma^2} $$ which is the ICC, or the reliability index from a psychometrical point of view. Note that this reliability is sample-dependent (as it depends on the between-subject variance). Instead of the mixed-effects model, we could derive the same results from a two-way ANOVA (subjects + time, as factors) and the corresponding Mean Squares. You will find additional references in these related questions: Repeatability and measurement error from and between observers, and Inter-rater reliability for ordinal or interval data. In R, you can use the icc() function from the psy package; the random intercept model described above corresponds to the "agreement" ICC, while incorporating the time effect as a fixed factor would yield the "consistency" ICC. You can also use the lmer() function from the lme4 package, or the lme() function from the nlme package. The latter has the advantage that you can easily obtain 95% CIs for the variance components (using the intervals() function). Dave Garson provided a nice overview (with SPSS illustrations) in Reliability Analysis, and Estimating Multilevel Models using SPSS, Stata, SAS, and R constitutes a useful tutorial, with applications in educational assessment. But the definitive reference is Shrout and Fleiss (1979), Intraclass Correlations: Uses in Assessing Rater Reliability, Psychological Bulletin, 86(2), 420-428. I have also added an example R script on GitHub, which includes the ANOVA and mixed-effect approaches. Also, should you add a constant value to all of the values taken at the second occasion, the Pearson correlation would remain identical (because it is based on deviations of the 1st and 2nd measurements from their respective means), whereas the reliability as computed through the random intercept model (or the agreement ICC) would decrease.
BTW, Cronbach's alpha is not very helpful in this case because it is merely a measure of the internal consistency (yet another form of "reliability") of a unidimensional scale; it would have no meaning should it be computed on items underlying different constructs. Even if your questions survey a single domain, it's hard to imagine mixing the two series of measurements, and Cronbach's alpha should be computed on each set separately. Its associated 95% confidence interval (computed by bootstrap) should give an indication about the stability of the internal structure between the two test occasions. As an example of applied work with ICC, I would suggest Johnson, SR, Tomlinson, GA, Hawker, GA, Granton, JT, Grosbein, HA, and Feldman, BM (2010). A valid and reliable belief elicitation method for Bayesian priors. Journal of Clinical Epidemiology, 63(4), 370-383.
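For concreteness, a hedged sketch of the random-intercept ("agreement") ICC with lme4, assuming a hypothetical long-format data frame d with columns score, subject and occasion (two rows per subject):

library(lme4)
fit <- lmer(score ~ 1 + (1 | subject), data = d)
vc  <- as.data.frame(VarCorr(fit))
tau2   <- vc$vcov[vc$grp == "subject"]    # between-subject variance
sigma2 <- vc$vcov[vc$grp == "Residual"]   # residual (error) variance
tau2 / (tau2 + sigma2)                    # the ICC, i.e. rho in the formula above

Reshaping the scores to one column per occasion and passing them to psy::icc() should give essentially the same "agreement" value.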
I've got a linear regression model fitted to a sample of observations on my variables, and I want to know: whether a specific variable is significant enough to remain in the model, and whether another variable (for which I have observations) ought to be included in the model. Which statistics can help me out? How can I get them most efficiently?
[ "https://stats.stackexchange.com/questions/1016", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/614/" ]
6
HuggingFaceH4/stack-exchange-preferences
Statistical significance is not usually a good basis for determining whether a variable should be included in a model. Statistical tests were designed to test hypotheses, not select variables. I know a lot of textbooks discuss variable selection using statistical tests, but this is generally a bad approach. See Harrell's book Regression Modeling Strategies for some of the reasons why. These days, variable selection based on the AIC (or something similar) is usually preferred.
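A small sketch of the AIC-style comparison in base R, using a built-in data set (lower AIC is preferred):

fit1 <- lm(Fertility ~ Agriculture + Education, data = swiss)
fit2 <- update(fit1, . ~ . + Catholic)   # candidate additional variable
AIC(fit1, fit2)                          # compare the two models directly
drop1(fit1, k = 2)                       # AIC effect of dropping each existing term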
Introductory, advanced, and even obscure, please. Mostly to test myself. I like to make sure I know what the heck I'm talking about :) Thanks
[ "https://stats.stackexchange.com/questions/1023", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74/" ]
4
HuggingFaceH4/stack-exchange-preferences
I wrote a post compiling links of Practice Questions for Statistics in Psychology (Undergraduate Level). http://jeromyanglim.blogspot.com/2009/12/practice-questions-for-statistics-in.html The questions would fall into the introductory category.
I am comparing two distributions with KL divergence which returns me a non-standardized number that, according to what I read about this measure, is the amount of information that is required to transform one hypothesis into the other. I have two questions: a) Is there a way to standardize a KL divergence so that it has a more meaningful interpretation, e.g. like an effect size or an R^2? Any form of standardization? b) In R, when using KLdiv (flexmix package) one can set the 'eps' value (default eps=1e-4) that sets all points smaller than eps to some standard value in order to provide numerical stability. I have been playing with different eps values and, for my data set, I am getting an increasingly larger KL divergence the smaller a number I pick. What is going on? I would expect that the smaller the eps, the more reliable the results should be, since they let more 'real values' become part of the statistic. No? I have to change the eps since it otherwise does not calculate the statistic but simply shows up as NA in the result table ...
[ "https://stats.stackexchange.com/questions/1028", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/608/" ]
4
HuggingFaceH4/stack-exchange-preferences
Suppose you are given n IID samples generated either by p or by q. You want to identify which distribution generated them. Take as the null hypothesis that they were generated by q. Let a indicate the probability of Type I error, mistakenly rejecting the null hypothesis, and b indicate the probability of Type II error. Then for large n, the probability of Type I error is at least $\exp(-n \text{KL}(p,q))$. In other words, for an "optimal" decision procedure, the probability of Type I error falls at most by a factor of exp(KL(p,q)) with each datapoint. Type II error falls by a factor of $\exp(\text{KL}(q,p))$ at most. For arbitrary n, a and b are related as follows $b \log \frac{b}{1-a}+(1-b)\log \frac{1-b}{a} \le n \text{KL}(p,q)$ and $a \log \frac{a}{1-b}+(1-a)\log \frac{1-a}{b} \le n \text{KL}(q,p)$ If we express the bound above as a lower bound on a in terms of b and KL and let b decrease to 0, the result seems to approach the "exp(-n KL(q,p))" bound even for small n. More details on page 10 here, and pages 74-77 of Kullback's "Information Theory and Statistics" (1978). As a side note, this interpretation can be used to motivate the Fisher information metric, since for any pair of distributions p,q at Fisher distance k from each other (small k) you need the same number of observations to tell them apart.
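A tiny numerical illustration of the quantities involved, for two made-up discrete distributions on a common support (KL in nats):

kl <- function(p, q) sum(p * log(p / q))   # assumes p, q > 0 on the same support
p <- c(0.4, 0.4, 0.2)
q <- c(0.2, 0.4, 0.4)
kl(p, q)             # ~0.139 nats per observation
n <- 50
exp(-n * kl(p, q))   # the rough scale of the error bound discussed above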
I have a sample and I am checking whether it is distributed according to some discrete distribution. However, I'm not entirely sure that Kolmogorov-Smirnov applies. Wikipedia seems to imply it does not. If it does not, how can I test the sample's distribution?
[ "https://stats.stackexchange.com/questions/1047", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/614/" ]
5
HuggingFaceH4/stack-exchange-preferences
It does not apply to discrete distributions. See http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm for example. Is there any reason you can't use a chi-square goodness of fit test? see http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm for more info.
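A minimal chi-square goodness-of-fit sketch for a discrete model (here a hypothetical Poisson; the counts and lambda are made up):

obs <- c(290, 345, 220, 145)        # observed counts of 0, 1, 2 and 3+ events
p   <- dpois(0:2, lambda = 1.2)     # hypothesized Poisson(1.2) probabilities
p   <- c(p, 1 - sum(p))             # lump 3+ into one category so p sums to 1
chisq.test(obs, p = p)
# Note: if lambda were estimated from these same data, the degrees of freedom
# reported by chisq.test would be one too many and the p-value should be adjusted.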
My question particularly applies to network reconstruction.
[ "https://stats.stackexchange.com/questions/1052", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
5
HuggingFaceH4/stack-exchange-preferences
Correlation measures the linear relationship (Pearson's correlation) or monotonic relationship (Spearman's correlation) between two variables, X and Y. Mutual information is more general and measures the reduction of uncertainty in Y after observing X. It is the KL distance between the joint density and the product of the individual densities. So MI can measure non-monotonic relationships and other more complicated relationships.
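A rough base-R sketch contrasting the two measures on a non-monotonic relationship, using a crude plug-in (binned) estimate of mutual information in nats:

set.seed(1)
x <- runif(10000, -1, 1)
y <- x^2 + rnorm(10000, sd = 0.05)                 # strong but non-monotonic dependence
cor(x, y)                                          # close to 0
bx <- cut(x, 20); by <- cut(y, 20)                 # discretize onto a 20 x 20 grid
pxy <- table(bx, by) / length(x)
px  <- rowSums(pxy); py <- colSums(pxy)
sum(pxy * log(pxy / outer(px, py)), na.rm = TRUE)  # clearly greater than 0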
I am looking for a good book/tutorial to learn about survival analysis. I am also interested in references on doing survival analysis in R.
[ "https://stats.stackexchange.com/questions/1053", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/172/" ]
5
HuggingFaceH4/stack-exchange-preferences
I like: Survival Analysis: Techniques for Censored and Truncated Data (Klein & Moeschberger) Modeling Survival Data: Extending the Cox Model (Therneau) The first does a good job of straddling theory and model building issues. It's mostly focused on semi-parametric techniques, but there is reasonable coverage of parametric methods. It doesn't really provide any R or other code examples, if that's what you're after. The second is heavy with modeling on the Cox PH side (as the title might indicate). It's by the author of the survival package in R and there are plenty of R examples and mini-case studies. I think both books complement each other, but I'd recommend the first for getting started. A quick way to get started in R is David Diez's guide.
What is the equivalent command in R for the stcox command in Stata?
[ "https://stats.stackexchange.com/questions/1054", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/172/" ]
4
HuggingFaceH4/stack-exchange-preferences
In package survival, it's coxph. John Fox has a nice introduction to using coxph in R: Cox Proportional-Hazards Regression for Survival Data
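A minimal example with the survival package's built-in lung data, roughly what stcox age sex would do after stset in Stata:

library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)     # hazard ratios via exp(coef)
cox.zph(fit)     # proportional-hazards check, similar in spirit to Stata's estat phtest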
My stats knowledge is self-taught, but a lot of the material I read points to a dataset having mean 0 and standard deviation of 1. If that is the case then: Why is mean 0 and SD 1 a nice property to have? Why would a random variable drawn from this sample equal 0.5? The chance of drawing 0.001 is the same as 0.5, so this should be a flat distribution... When people talk about Z Scores what do they actually mean here?
[ "https://stats.stackexchange.com/questions/1063", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/353/" ]
5
HuggingFaceH4/stack-exchange-preferences
At the beginning, the most useful answer is probably that a mean of 0 and sd of 1 are mathematically convenient. If you can work out the probabilities for a distribution with a mean of 0 and standard deviation of 1, you can work them out for any similar distribution of scores with a very simple equation. I'm not following this question. The mean of 0 and standard deviation of 1 usually applies to the standard normal distribution, often called the bell curve. The most likely value is the mean and it falls off as you get farther away. If you have a truly flat distribution then there is no value more likely than another. Your question here is poorly formed. Were you looking at questions about coin flips perhaps? Look up the binomial distribution and the central limit theorem. "Mean here"? Where? The simple answer for z-scores is that they are your scores scaled as if your mean were 0 and standard deviation were 1. Another way of thinking about it is that it takes an individual score as the number of standard deviations that score is from the mean. The equation is (score - mean) / standard deviation. The reasons you'd do that are quite varied, but one is that in intro statistics courses you have tables of probabilities for different z-scores (see answer 1). If you looked up z-score first, even in Wikipedia, you would have gotten pretty good answers.
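A two-line illustration of that z-score definition in R:

x <- c(12, 15, 9, 22, 17)
(x - mean(x)) / sd(x)   # z-scores computed by hand
as.vector(scale(x))     # the same thing via base R's scale()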
I want to represent a variable as a number between 0 and 1. The variable is a non-negative integer with no inherent bound. I map 0 to 0 but what can I map to 1 or numbers between 0 and 1? I could use the history of that variable to provide the limits. This would mean I have to restate old statistics if the maximum increases. Do I have to do this or are there other tricks I should know about?
[ "https://stats.stackexchange.com/questions/1112", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/652/" ]
6
HuggingFaceH4/stack-exchange-preferences
A very common trick to do so (e.g., in connectionist modeling) is to use the hyperbolic tangent tanh as the "squashing function". It maps all numbers into the interval between -1 and 1, which in your case (non-negative input) restricts the range to 0 to 1. In R and MATLAB you get it via tanh(). Another squashing function is the logistic function (thanks to Simon for the name), given by $ f(x) = 1 / (1 + e ^{-x} ) $, which restricts the range to 0 to 1; however, it maps 0 to .5, so for non-negative input the output lies between .5 and 1, and you would have to multiply the result by 2 and subtract 1 to fit your data into the interval between 0 and 1. Here is some simple R code which plots both functions (tanh in red, logistic in blue) so you can see how both squash:

x <- seq(0, 20, 0.001)
plot(x, tanh(x), pch=".", col="red", ylab="y")
points(x, (1 / (1 + exp(-x)))*2 - 1, pch=".", col="blue")
I have cross classified data in a 2 x 2 x 6 table. Let's call the dimensions response, A and B. I fit a logistic regression to the data with the model response ~ A * B. An analysis of deviance of that model says that both terms and their interaction are significant. However, looking at the proportions of the data, it looks like only 2 or so levels of B are responsible for these significant effects. I would like to test to see which levels are the culprits. Right now, my approach is to perform 6 chi-squared tests on 2 x 2 tables of response ~ A, and then to adjust the p-values from those tests for multiple comparisons (using the Holm adjustment). My question is whether there is a better approach to this problem. Is there a more principled modeling approach, or multiple chi-squared test comparison approach?
[ "https://stats.stackexchange.com/questions/1133", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/287/" ]
5
HuggingFaceH4/stack-exchange-preferences
You should look into "partitioning chi-squared". This is similar in logic to performing post-hoc tests in ANOVA. It will allow you to determine whether your significant overall test is primarily attributable to differences in particular categories or groups of categories. A quick google turned up this presentation, which at the end discusses methods for partitioning chi-squared. http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/2way_chi-ha-online.pdf
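A hedged sketch of the sequential (Lancaster-type) partition idea for a 2 x K table, with a made-up table: compare column 1 vs 2, then columns 1+2 vs 3, and so on. Each step is a 1-df test; for the likelihood-ratio (G-squared) version of the statistic the components add up exactly to the overall value, while for the Pearson statistic used here the additivity is only approximate.

tab <- matrix(c(20, 15, 30, 35, 25, 40,
                80, 85, 70, 65, 75, 60), nrow = 2, byrow = TRUE)
for (k in 2:ncol(tab)) {
  sub <- cbind(rowSums(tab[, 1:(k - 1), drop = FALSE]), tab[, k])
  cat("cols 1..", k - 1, " vs ", k, ": p = ",
      format.pval(chisq.test(sub, correct = FALSE)$p.value), "\n", sep = "")
}
chisq.test(tab, correct = FALSE)   # overall (K-1)-df test, for comparison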
I am working with a large number of time series. These time series are basically network measurements coming every 10 minutes, and some of them are periodic (i.e. the bandwidth), while some others aren't (i.e. the amount of routing traffic). I would like a simple algorithm for doing online "outlier detection". Basically, I want to keep in memory (or on disk) the whole historical data for each time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results? I'm currently using a moving average in order to remove some noise, but then what next? Simple things like standard deviation, MAD, ... against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more "accurate", ideally a black box like: double outlier_detection(double* vector, double value); where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample "value".
[ "https://stats.stackexchange.com/questions/1142", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/667/" ]
7
HuggingFaceH4/stack-exchange-preferences
Here is a simple R function that will find time series outliers (and optionally show them in a plot). It will handle seasonal and non-seasonal time series. The basic idea is to find robust estimates of the trend and seasonal components and subtract them, then find outliers in the residuals. The test for residual outliers is the same as for the standard boxplot -- points more than 1.5 IQR above the upper quartile or below the lower quartile are assumed to be outliers. The number of IQRs above/below these thresholds is returned as an outlier "score". So the score can be any positive number, and will be zero for non-outliers. I realise you are not implementing this in R, but I often find an R function a good place to start. Then the task is to translate this into whatever language is required.

tsoutliers <- function(x, plot=FALSE)
{
  x <- as.ts(x)
  if(frequency(x) > 1)
    resid <- stl(x, s.window="periodic", robust=TRUE)$time.series[,3]
  else
  {
    tt <- 1:length(x)
    resid <- residuals(loess(x ~ tt))
  }
  resid.q <- quantile(resid, prob=c(0.25, 0.75))
  iqr <- diff(resid.q)
  limits <- resid.q + 1.5*iqr*c(-1, 1)
  score <- abs(pmin((resid - limits[1])/iqr, 0) + pmax((resid - limits[2])/iqr, 0))
  if(plot)
  {
    plot(x)
    x2 <- ts(rep(NA, length(x)))
    x2[score > 0] <- x[score > 0]
    tsp(x2) <- tsp(x)
    points(x2, pch=19, col="red")
    return(invisible(score))
  }
  else
    return(score)
}
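Example usage on a built-in monthly series (not from the original answer): with plot = TRUE any flagged points are marked in red, and a score of 0 means "not an outlier".

scores <- tsoutliers(co2, plot = TRUE)
which(scores > 0)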
The wiki discusses the problems that arise when multicollinearity is an issue in linear regression. The basic problem is that multicollinearity results in unstable parameter estimates, which makes it very difficult to assess the effect of independent variables on dependent variables. I understand the technical reasons behind the problems (may not be able to invert $X' X$, ill-conditioned $X' X$, etc.) but I am searching for a more intuitive (perhaps geometric?) explanation for this issue. Is there a geometric or perhaps some other form of easily understandable explanation as to why multicollinearity is problematic in the context of linear regression?
[ "https://stats.stackexchange.com/questions/1149", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
8
HuggingFaceH4/stack-exchange-preferences
Consider the simplest case where $Y$ is regressed against $X$ and $Z$ and where $X$ and $Z$ are highly positively correlated. Then the effect of $X$ on $Y$ is hard to distinguish from the effect of $Z$ on $Y$ because any increase in $X$ tends to be associated with an increase in $Z$. Another way to look at this is to consider the equation. If we write $Y = b_0 + b_1X + b_2Z + e$, then the coefficient $b_1$ is the increase in $Y$ for every unit increase in $X$ while holding $Z$ constant. But in practice, it is often impossible to hold $Z$ constant, and the positive correlation between $X$ and $Z$ means that a unit increase in $X$ is usually accompanied by some increase in $Z$ at the same time. A similar but more complicated explanation holds for other forms of multicollinearity.
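A small simulation of this point in base R: when X and Z are nearly copies of each other, their individual coefficients get huge standard errors even though their combined effect is estimated precisely.

set.seed(1)
x <- rnorm(100)
z <- x + rnorm(100, sd = 0.05)            # Z is almost a copy of X
y <- 1 + 2*x + 3*z + rnorm(100)
summary(lm(y ~ x + z))$coefficients       # unstable coefficients, large SEs
summary(lm(y ~ I(x + z)))$coefficients    # the combined effect is well estimated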
When solving business problems using data, it's common that at least one key assumption that underpins classical statistics is invalid. Most of the time, no one bothers to check those assumptions, so you never actually know. For instance, that so many of the common web metrics are "long-tailed" (relative to the normal distribution) is, by now, so well documented that we take it for granted. Another example, online communities--even in communities with thousands of members, it's well documented that by far the largest share of contribution to/participation in many of these communities is attributable to a minuscule group of 'super-contributors.' (E.g., a few months ago, just after the SO API was made available in beta, a StackOverflow member published a brief analysis from data he collected through the API; his conclusion--less than one percent of the SO members account for most of the activity on SO (presumably asking questions, and answering them), another 1-2% accounted for the rest, and the overwhelming majority of the members do nothing). Distributions of that sort--again more often the rule rather than the exception--are often best modeled with a power law density function. For these types of distributions, even the central limit theorem is problematic to apply. So given the abundance of populations like this of interest to analysts, and given that classical models perform demonstrably poorly on these data, and given that robust and resistant methods have been around for a while (at least 20 years, I believe)--why are they not used more often? (I am also wondering why I don't use them more often, but that's not really a question for CrossValidated.) Yes I know that there are textbook chapters devoted entirely to robust statistics and I know there are (a few) R packages (robustbase is the one I am familiar with and use), etc. And yet given the obvious advantages of these techniques, they are often clearly the better tools for the job--why are they not used much more often? Shouldn't we expect to see robust (and resistant) statistics used far more often (perhaps even presumptively) compared with the classical analogs? The only substantive (i.e., technical) explanation I have heard is that robust techniques (likewise for resistant methods) lack the power/sensitivity of classical techniques. I don't know if this is indeed true in some cases, but I do know it is not true in many cases. A final word of preemption: yes I know this question does not have a single demonstrably correct answer; very few questions on this site do. Moreover, this question is a genuine inquiry; it's not a pretext to advance a point of view--I don't have a point of view here, just a question for which I am hoping for some insightful answers.
[ "https://stats.stackexchange.com/questions/1164", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/438/" ]
7
HuggingFaceH4/stack-exchange-preferences
Researchers want small p-values, and you can get smaller p-values if you use methods that make stronger distributional assumptions. In other words, non-robust methods let you publish more papers. Of course more of these papers may be false positives, but a publication is a publication. That's a cynical explanation, but it's sometimes valid.
In my area of research, a popular way of displaying data is to use a combination of a bar chart with "handle-bars". For example, see the plot generated by the R code below. The "handle-bars" alternate between standard errors and standard deviations depending on the author. Typically, the sample sizes for each "bar" are fairly small - around six. These plots seem to be particularly popular in biological sciences - see the first few papers of BMC Biology, vol 3 for examples. So how would you present this data? Why I dislike these plots: Personally I don't like these plots. When the sample size is small, why not just display the individual data points? Is it the sd or the se that is being displayed? No-one agrees which to use. Why use bars at all? The data doesn't (usually) start at 0, but a first pass at the graph suggests it does. The graphs don't give an idea about the range or sample size of the data. R script This is the R code I used to generate the plot. That way you can (if you want) use the same data.

#Generate the data
set.seed(1)
names = c("A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3")
prevs = c(38, 37, 31, 31, 29, 26, 40, 32, 39)
n = 6
se = numeric(length(prevs))
for(i in 1:length(prevs)) se[i] = sd(rnorm(n, prevs[i], 15))/sqrt(n)  # se = sd/sqrt(n)

#Basic plot
par(fin=c(6,6), pin=c(6,6), mai=c(0.8,1.0,0.0,0.125), cex.axis=0.8)
barplot(prevs, space=c(0,0,0,3,0,0,3,0,0), names.arg=NULL, horiz=FALSE,
        axes=FALSE, ylab="Percent", col=c(2,3,4), width=5, ylim=range(0,50))

#Add in the CIs
xx = c(2.5, 7.5, 12.5, 32.5, 37.5, 42.5, 62.5, 67.5, 72.5)
for (i in 1:length(prevs)) {
  lines(rep(xx[i], 2), c(prevs[i], prevs[i]+se[i]))
  lines(c(xx[i]+1/2, xx[i]-1/2), rep(prevs[i]+se[i], 2))
}

#Add the axis
axis(2, tick=TRUE, xaxp=c(0, 50, 5))
axis(1, at=xx+0.1, labels=names, font=1, tck=0, tcl=0, las=1, padj=0, col=0, cex=0.1)
[ "https://stats.stackexchange.com/questions/1173", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8/" ]
5
HuggingFaceH4/stack-exchange-preferences
Thanks for all your answers. For completeness I thought I should include what I usually do. I tend to do a combination of the suggestions given: dots, boxplots (when n is large), and se (or sd) ranges. (Figure removed by a moderator because the site hosting the image no longer appears to work correctly.) From the dot plot, it is clear that the data are far more spread out than the "handle bar" plots suggest. In fact, there is a negative value in A3! I've made this answer a CW so I don't gain rep.
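A rough base-R sketch of that combination (individual points plus mean +/- se), with made-up data in the same spirit as the question:

set.seed(1)
d <- data.frame(group = rep(c("A1","A2","A3","B1","B2","B3"), each = 6),
                y     = rnorm(36, mean = 33, sd = 8))
stripchart(y ~ group, data = d, vertical = TRUE, method = "jitter",
           pch = 1, ylab = "Percent")
m  <- tapply(d$y, d$group, mean)
se <- tapply(d$y, d$group, sd) / sqrt(6)
points(seq_along(m), m, pch = 19)                    # group means
arrows(seq_along(m), m - se, seq_along(m), m + se,   # mean +/- 1 se
       angle = 90, code = 3, length = 0.05)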
I know of normality tests, but how do I test for "Poisson-ness"? I have a sample of ~1000 non-negative integers, which I suspect are taken from a Poisson distribution, and I would like to test that.
[ "https://stats.stackexchange.com/questions/1174", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/634/" ]
6
HuggingFaceH4/stack-exchange-preferences
First of all, my advice is that you must refrain from trying out a Poisson distribution on the data just as it is. I suggest you first form a theory as to why a Poisson distribution should fit a particular dataset or phenomenon. Once you have established this, the next question is whether the distribution is homogeneous or not, that is, whether all parts of the data are handled by the same Poisson distribution or whether there is variation in this based on some aspect like time or space. Once you are convinced of these aspects, try the following three tests: (1) a likelihood ratio test using a chi-square variable; (2) the conditional chi-square statistic, also called the Poisson dispersion test or variance test; (3) the Neyman-Scott statistic, which is based on a variance-stabilizing transformation of the Poisson variable. Search for these and you will find them easily on the net.
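A hedged sketch of the dispersion (variance) test mentioned in point (2): under a homogeneous Poisson model, (n - 1) * var(x) / mean(x) is approximately chi-squared with n - 1 degrees of freedom.

x <- c(3, 1, 0, 2, 4, 1, 2, 0, 1, 3, 2, 1)   # your ~1000 counts would go here
n <- length(x)
D <- (n - 1) * var(x) / mean(x)               # dispersion statistic
p.value <- 2 * min(pchisq(D, n - 1), 1 - pchisq(D, n - 1))   # crude two-sided p
c(D = D, p.value = p.value)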
Back in April, I attended a talk at the UMD Math Department Statistics group seminar series called "To Explain or To Predict?". The talk was given by Prof. Galit Shmueli who teaches at UMD's Smith Business School. Her talk was based on research she did for a paper titled "Predictive vs. Explanatory Modeling in IS Research", and a follow-up working paper titled "To Explain or To Predict?". Dr. Shmueli's argument is that the terms predictive and explanatory in a statistical modeling context have become conflated, and that statistical literature lacks a thorough discussion of the differences. In the paper, she contrasts both and talks about their practical implications. I encourage you to read the papers. The questions I'd like to pose to the practitioner community are: How do you define a predictive exercise vs an explanatory/descriptive one? It would be useful if you could talk about the specific application. Have you ever fallen into the trap of using one when meaning to use the other? I certainly have. How do you know which one to use?
[ "https://stats.stackexchange.com/questions/1194", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11/" ]
6
HuggingFaceH4/stack-exchange-preferences
In one sentence: Predictive modelling is all about "what is likely to happen?", whereas explanatory modelling is all about "what can we do about it?" In many sentences: I think the main difference is what is intended to be done with the analysis. I would suggest explanation is much more important for intervention than prediction. If you want to do something to alter an outcome, then you had best be looking to explain why it is the way it is. Explanatory modelling, if done well, will tell you how to intervene (which input should be adjusted). However, if you simply want to understand what the future will be like, without any intention (or ability) to intervene, then predictive modelling is more likely to be appropriate. As an incredibly loose example, using "cancer data": predictive modelling using "cancer data" would be appropriate (or at least useful) if you were funding the cancer wards of different hospitals. You don't really need to explain why people get cancer; rather, you only need an accurate estimate of how much service will be required. Explanatory modelling probably wouldn't help much here. For example, knowing that smoking leads to higher risk of cancer doesn't on its own tell you whether to give more funding to ward A or ward B. Explanatory modelling of "cancer data" would be appropriate if you wanted to decrease the national cancer rate - predictive modelling would be fairly obsolete here. The ability to accurately predict cancer rates is hardly likely to help you decide how to reduce it. However, knowing that smoking leads to higher risk of cancer is valuable information - because if you decrease smoking rates (e.g. by making cigarettes more expensive), this leads to more people with less risk, which (hopefully) leads to an expected decrease in cancer rates. Looking at the problem this way, I would think that explanatory modelling would mainly focus on variables which are under the control of the user, either directly or indirectly. There may be a need to collect other variables, but if you can't change any of the variables in the analysis, then I doubt that explanatory modelling will be useful, except maybe to give you the desire to gain control or influence over those variables which are important. Predictive modelling, crudely, just looks for associations between variables, whether controlled by the user or not. You only need to know the inputs/features/independent variables/etc. to make a prediction, but you need to be able to modify or influence the inputs/features/independent variables/etc. in order to intervene and change an outcome.
This post is the continuation of another post related to a generic method for outlier detection in time series. Basically, at this point I'm interested in a robust way to discover the periodicity/seasonality of a generic time series affected by a lot of noise. From a developer point of view, I would like a simple interface such as: unsigned int discover_period(vector<double> v); where v is the array containing the samples, and the return value is the period of the signal. The main point is that, again, I can't make any assumption regarding the analyzed signal. I already tried an approach based on the signal autocorrelation (detecting the peaks of a correlogram), but it's not as robust as I would like.
[ "https://stats.stackexchange.com/questions/1207", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/667/" ]
6
HuggingFaceH4/stack-exchange-preferences
If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies will be affected by trend, so you need to detrend the series first. The following R function should do the job for most series. It is far from perfect, but I've tested it on a few dozen examples and it seems to work ok. It will return 1 for data that have no strong periodicity, and the length of period otherwise. Update: Version 2 of function. This is much faster and seems to be more robust.

find.freq <- function(x)
{
  n <- length(x)
  spec <- spec.ar(c(x), plot=FALSE)
  if(max(spec$spec) > 10) # Arbitrary threshold chosen by trial and error.
  {
    period <- round(1/spec$freq[which.max(spec$spec)])
    if(period==Inf) # Find next local maximum
    {
      j <- which(diff(spec$spec) > 0)
      if(length(j) > 0)
      {
        nextmax <- j[1] + which.max(spec$spec[j[1]:500])
        period <- round(1/spec$freq[nextmax])
      }
      else
        period <- 1
    }
  }
  else
    period <- 1
  return(period)
}
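Example usage (hedged, since the threshold inside is ad hoc): for a strongly seasonal monthly series the result should come out near 12, and for white noise it should be 1.

find.freq(ldeaths)      # monthly UK lung-disease deaths, strong yearly cycle
find.freq(rnorm(200))   # no periodicity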
I'm looking for some robust techniques to remove outliers and errors (whatever the cause) from financial time-series data (i.e. tickdata). Tick-by-tick financial time-series data is very messy. It contains huge (time) gaps when the exchange is closed, and makes huge jumps when the exchange opens again. When the exchange is open, all kinds of factors introduce trades at price levels that are wrong (they did not occur) and/or not representative of the market (a spike because of an incorrectly entered bid or ask price, for example). This paper by tickdata.com (PDF) does a good job of outlining the problem, but offers few concrete solutions. Most papers I can find online that mention this problem either ignore it (the tickdata is assumed filtered) or include the filtering as part of some huge trading model which hides any useful filtering steps. Is anybody aware of more in-depth work in this area? Update: this question seems similar on the surface but: Financial time series is (at least at the tick level) non-periodic. The opening effect is a big issue because you can't simply use the last day's data as initialisation even though you'd really like to (because otherwise you have nothing). External events might cause the new day's opening to differ dramatically both in absolute level, and in volatility from the previous day. Wildly irregular frequency of incoming data. Near the open and close of the day the number of datapoints per second can be 10 times higher than the average during the day. The other question deals with regularly sampled data. The "outliers" in financial data exhibit some specific patterns that could be detected with specific techniques not applicable in other domains and I'm - in part - looking for those specific techniques. In more extreme cases (e.g. the flash crash) the outliers might amount to more than 75% of the data over longer intervals (> 10 minutes). In addition, the (high) frequency of incoming data contains some information about the outlier aspect of the situation.
[ "https://stats.stackexchange.com/questions/1223", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/127/" ]
4
HuggingFaceH4/stack-exchange-preferences
The problem is definitely hard. Mechanical rules like +/- N1 times standard deviations, or +/- N2 times MAD, or +/- N3 IQR, or ... will fail because there are always some series that are different, for example: fixings like interbank rates may be constant for some time and then jump all of a sudden; similarly for e.g. certain foreign exchange rates coming off a peg; certain instruments are implicitly spreads, and these may be near zero for periods and all of a sudden jump manifold. Been there, done that, ... in a previous job. You could try to bracket each series using arbitrage relationships (e.g. assuming USD/EUR and EUR/JPY are presumed good, you can work out bands around what USD/JPY should be; likewise for derivatives off an underlying, etc.). Commercial data vendors expend some effort on this, and those of us who are clients of theirs know ... it still does not exclude errors.
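A toy illustration of the arbitrage-bracket idea (all quotes below are made up): flag USD/JPY ticks that stray too far from the rate implied by USD/EUR and EUR/JPY.

usd_eur <- c(0.90, 0.90, 0.91, 0.90)    # hypothetical quotes
eur_jpy <- c(130,  131,  130,  131)
usd_jpy <- c(117,  118,  250,  118)     # the third tick is a bad print
implied <- usd_eur * eur_jpy            # implied cross rate
tol <- 0.02                             # arbitrary 2% tolerance band
which(abs(usd_jpy / implied - 1) > tol) # flags tick 3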
I am using a control chart to try to work on some infection data, and will raise an alert if the infection rate is considered "out of control". Problems arise when I come to a set of data where most of the time points have zero infections, with only a few occasions of one to two infections, but these already exceed the control limit of the chart and raise an alert. How should I construct the control chart if the data set has very few positive infection counts?
[ "https://stats.stackexchange.com/questions/1228", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/588/" ]
4
HuggingFaceH4/stack-exchange-preferences
Change the variable. Run a control chart for the "time between infections" variable. That way, instead of a discrete variable with a very small range of values, you have a continuous variable with an adequate range of values. If the interval between infections gets too small, the chart will give an "out of control" indication. This procedure was recommended by Donald Wheeler in Understanding Variation: The Key to Managing Chaos.
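A minimal sketch of that "time between events" idea (sometimes called a t-chart or g-chart), assuming infection dates are available and the gaps are roughly exponential; the dates below are hypothetical.

dates <- as.Date(c("2010-01-03", "2010-02-11", "2010-02-18", "2010-04-02", "2010-04-05"))
gaps  <- as.numeric(diff(dates))   # days between successive infections
rate  <- 1 / mean(gaps)            # estimated infection rate
lcl   <- qexp(0.00135, rate)       # lower limit (tail area comparable to 3 sigma)
plot(gaps, type = "b", ylim = c(0, max(gaps)), ylab = "Days between infections")
abline(h = lcl, lty = 2)           # gaps below this line signal "too frequent"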
When would you tend to use ROC curves over some other tests to determine the predictive ability of some measurement on an outcome? When dealing with discrete outcomes (alive/dead, present/absent), what makes ROC curves more or less powerful than something like a chi-square?
[ "https://stats.stackexchange.com/questions/1241", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/684/" ]
5
HuggingFaceH4/stack-exchange-preferences
The ROC function (it is not necessarily a curve) allows you to assess the discrimination ability provided by a specific statistical model (comprised of a predictor variable or a set of them). A main consideration of ROCs is that model predictions do not only stem from the model's ability to discriminate/make predictions based on the evidence provided by the predictor variables. Also operating is a response criterion that defines how much evidence is necessary for the model to predict a response, and what the outcome of these responses is. The value that is established for the response criterion will greatly influence the model predictions, and ultimately the type of mistakes that it will make. Consider a generic model with predictor variables and a response criterion. This model is trying to predict the presence of X, by responding Yes or No. So you have the following confusion matrix:

                             X present    X absent
Model predicts X present     Hit          False Alarm
Model predicts X absent      Miss         Correct Rejection

In this matrix, you only need to consider the proportions of Hits and False Alarms (because the others can be derived from these, given that they have to sum to 1). For each response criterion, you will have a different confusion matrix. The errors (Misses and False Alarms) are negatively related, which means that a response criterion that minimizes false alarms maximizes misses, and vice versa. The message is: there is no free lunch. So, in order to understand how well the model discriminates cases/makes predictions, independently of the response criterion established, you plot the Hit and False Alarm rates produced across the range of possible response criteria. What you get from this plot is the ROC function. The area under the function provides an unbiased and non-parametric measure of the discrimination ability of the model. This measure is very important because it is free of any confounds that could have been produced by the response criterion. A second important aspect is that by analyzing the function, one can define which response criterion is better for your objectives: what types of errors you want to avoid, and what errors are OK. For instance, consider an HIV test: it is a test that looks up some sort of evidence (in this case antibodies) and makes a discrimination/prediction based on the comparison of the evidence with a response criterion. This response criterion is usually set very low, so that you minimize Misses. Of course this will result in more False Alarms, which have a cost, but a cost that is negligible when compared to the Misses. With ROCs you can assess some model's discrimination ability, independently of the response criterion, and also establish the optimal response criterion, given the needs and constraints of whatever it is that you are measuring. Tests like chi-square cannot help at all in this because even if you are testing whether the predictions are at chance level, many different Hit-False Alarm pairs are consistent with chance level. Some frameworks, like signal detection theory, assume a priori that the evidence available for discrimination has a specific distribution (e.g., normal distribution, or gamma distribution). When these assumptions hold (or are pretty close), some really nice measures are available that make your life easier. Hope this helps to elucidate the advantages of ROCs.
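A small base-R sketch of tracing out the hit and false-alarm rates across response criteria and computing the (trapezoidal) area under the resulting ROC, using simulated "evidence":

set.seed(2)
evidence <- c(rnorm(200, 1), rnorm(200, 0))    # model evidence when X is present / absent
truth    <- rep(c(1, 0), each = 200)
cuts <- sort(unique(evidence), decreasing = TRUE)
hit  <- sapply(cuts, function(k) mean(evidence[truth == 1] >= k))
fa   <- sapply(cuts, function(k) mean(evidence[truth == 0] >= k))
plot(fa, hit, type = "l", xlab = "False-alarm rate", ylab = "Hit rate")
abline(0, 1, lty = 2)                          # chance performance
x <- c(0, fa, 1); y <- c(0, hit, 1)
sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2) # area under the ROC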
I was wondering if there is a statistical model "cheat sheet(s)" that lists any or more information: when to use the model when not to use the model required and optional inputs expected outputs has the model been tested in different fields (policy, bio, engineering, manufacturing, etc)? is it accepted in practice or research? expected variation / accuracy / precision caveats scalability deprecated model, avoid or don't use etc .. I've seen hierarchies before on various websites, and some simplistic model cheat sheets in various textbooks; however, it'll be nice if there is a larger one that encompasses various types of models based on different types of analysis and theories.
[ "https://stats.stackexchange.com/questions/1252", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/59/" ]
6
HuggingFaceH4/stack-exchange-preferences
I have previously found UCLA's "Choosing the Correct Statistical Test" to be helpful: https://stats.idre.ucla.edu/other/mult-pkg/whatstat/ It also gives examples of how to do the analysis in SAS, Stata, SPSS and R.
Suppose I have a table of counts that looks like this:

             A     B      C
Success   1261   230   3514
Failure    381   161   4012

I have a hypothesis that there is some probability $p$ such that $P(Success_A) = p^i$, $P(Success_B) = p^j$ and $P(Success_C) = p^k$. Is there some way to produce estimates for $p$, $i$, $j$ and $k$? The idea I have is to iteratively try values for $p$ between 0 and 1, and values for $i$, $j$ and $k$ between 1 and 5. Given the column totals, I could produce expected values, then calculate $\chi^2$ or $G^2$. This would produce a best fit, but it wouldn't give any confidence interval for any of the values. It's also not particularly computationally efficient. As a side question, if I wanted to test the goodness of fit of a particular set of values for $i$, $j$ and $k$ (specifically 1, 2, and 3), once I've calculated $\chi^2$ or $G^2$, I'd want to calculate significance on the $\chi^2$ distribution with 1 degree of freedom, correct? This isn't a normal contingency table since the relationship of each column to the others is fixed to a single value. Given $p$, $i$, $j$ and $k$, filling in a single value in a cell fixes what the values of the other cells must be.
[ "https://stats.stackexchange.com/questions/1261", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/287/" ]
4
HuggingFaceH4/stack-exchange-preferences
Following up on my comment, this question would be very simple if i, j, and k were not restricted to be integers. The reason is as follows: let pA, pB, and pC denote the observed probabilities of success in the three groups. Then let p=pA, i=1, j=log(pB)/log(pA), and k=log(pC)/log(pA). These will easily satisfy the required conditions (except for j and k being between 1 and 5, but that looks like an ad-hoc simplifying assumption instead of a real constraint). In fact, if you do this with the given data, you get j=2.009 and k=2.884, which I think prompted the original question. It is even possible to get standard errors for these quantities (or rather their logarithm). Note that if pB = p^j, then log(-log(pB)) = log(j) + log(-log(p)), so one can use a binomial regression with a complementary log-log link for the number of failures (the complementary log-log function is log(-log(1-x)) and this link is built in for most statistical software such as R or SAS). Then one could check whether the 95% CIs include integers, or perhaps run a likelihood-ratio (or other) test comparing the fit of the unrestricted model to one where j and k are rounded to the nearest integer. The above assumes that i=1. Something similar could probably be done for other integer i's (probably by having an offset of log(i) in the model - I have not thought it through). In the end, I want to note that you should make sure that your hypothesis is meaningful by itself, and did not come from playing with the data. Otherwise any statistical test is biased because you picked a form of the null hypothesis (out of all the possible weird forms that you could have imagined) that is likely to fit.
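A hedged sketch of that complementary log-log fit in R, using the counts from the question and treating group A as the reference (i = 1); exponentiating the group coefficients should reproduce the j = 2.009 and k = 2.884 mentioned above.

d <- data.frame(group   = c("A", "B", "C"),
                success = c(1261, 230, 3514),
                failure = c(381, 161, 4012))
fit <- glm(cbind(failure, success) ~ group, family = binomial(link = "cloglog"), data = d)
summary(fit)
exp(coef(fit)[-1])               # estimates of j and k
exp(confint.default(fit)[-1, ])  # Wald 95% CIs; check whether they include integers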
The following question is one of those holy grails for me for some time now, and I hope someone might be able to offer good advice. I wish to perform a non-parametric repeated measures multiway anova using R. I have been doing some online searching and reading for some time, and so far was able to find solutions for only some of the cases: the Friedman test for one way nonparametric repeated measures anova, ordinal regression with the {car} Anova function for multi way nonparametric anova, and so on. The partial solutions are NOT what I am looking for in this question thread. I have summarized my findings so far in a post I published some time ago (titled: Repeated measures ANOVA with R (functions and tutorials), in case it would help anyone). If what I read online is true, this task might be achieved using a mixed Ordinal Regression model (a.k.a. Proportional Odds Model). I found two packages that seem relevant, but couldn't find any vignette on the subject: http://cran.r-project.org/web/packages/repolr/ http://cran.r-project.org/web/packages/ordinal/ So being new to the subject matter, I was hoping for some directions from people here. Are there any tutorials/suggested readings on the subject? Even better, can someone suggest a simple example code for how to run and analyse this in R (e.g: "non-parametric repeated measures multiway anova")?
[ "https://stats.stackexchange.com/questions/1266", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ]
4
HuggingFaceH4/stack-exchange-preferences
The ez package, of which I am the author, has a function called ezPerm() which computes a permutation test, but probably doesn't do interactions properly (the documentation admits as much). The latest version has a function called ezBoot(), which lets you do bootstrap resampling that takes into account repeated measures (by resampling subjects, then within subjects), either using traditional cell means as the prediction statistic or using mixed effects modelling to make predictions for each cell in the design. I'm still not sure how "non-parametric" the bootstrap CIs from mixed effects model predictions are; my intuition is that they might reasonably be considered non-parametric, but my confidence in this area is low given that I'm still learning about mixed effects models.
I am using Singular Value Decomposition as a dimensionality reduction technique. Given N vectors of dimension D, the idea is to represent the features in a transformed space of uncorrelated dimensions, which condenses most of the information of the data in the eigenvectors of this space in decreasing order of importance. Now I am trying to apply this procedure to time series data. The problem is that not all the sequences have the same length, thus I can't really build the num-by-dim matrix and apply SVD. My first thought was to pad the matrix with zeros by building a num-by-maxDim matrix and filling the empty spaces with zeros, but I'm not so sure if that is the correct way. My question is how do you apply the SVD approach of dimensionality reduction to time series of different lengths? Alternatively, are there any other similar methods of eigenspace representation usually used with time series? Below is a piece of MATLAB code to illustrate the idea:

X = randn(100,4);                        % data matrix of size N-by-dim
X0 = bsxfun(@minus, X, mean(X));         % standardize
[U S V] = svd(X0,0);                     % SVD
variances = diag(S).^2 / (size(X,1)-1);  % variances along eigenvectors
KEEP = 2;                                % number of dimensions to keep
newX = U(:,1:KEEP)*S(1:KEEP,1:KEEP);     % reduced and transformed data

(I am coding mostly in MATLAB, but I'm comfortable enough to read R/Python/.. as well)
[ "https://stats.stackexchange.com/questions/1268", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/170/" ]
4
HuggingFaceH4/stack-exchange-preferences
There is a reasonably new area of research called Matrix Completion, that probably does what you want. A really nice introduction is given in this lecture by Emmanuel Candes
Sometimes I want to do an exact test by examining all possible combinations of the data to build an empirical distribution against which I can test my observed differences between means. To find the possible combinations I'd typically use the combn function. The choose function can show me how many possible combinations there are. It is very easy for the number of combinations to get so large that it is not possible to store the result of the combn function, e.g. combn(28,14) requires a 2.1 Gb vector. So I tried writing an object that stepped through the same logic as the combn function in order to provide the values off an imaginary "stack" one at a time. However, this method (as I instantiated it) is easily 50 times slower than combn at reasonable combination sizes, leading me to think it will also be painfully slow for larger combination sizes. Is there a better algorithm for doing this sort of thing than the algorithm used in combn? Specifically, is there a way to generate and pull the Nth possible combination without calculating through all previous combinations?
[ "https://stats.stackexchange.com/questions/1286", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ]
4
HuggingFaceH4/stack-exchange-preferences
If you wish to trade processing speed for memory (which I think you do), I would suggest the following algorithm: (1) set up a loop from 1 to N choose K, indexed by i; (2) treat each i as an index to a combinadic and decode it as such; (3) use the combination to compute your test statistic, store the result, and discard the combination; (4) repeat. This will give you all N choose K possible combinations without having to create them explicitly. I have code to do this in R if you'd like it (you can email me at mark dot m period fredrickson at-symbol gmail dot com).
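Not the author's code, but a hedged base-R sketch of the same decode-an-index idea (lexicographic unranking), so the Nth combination can be generated without enumerating the earlier ones:

nth.combination <- function(m, n, k) {  # m is a 0-based index, 0 <= m < choose(n, k)
  comb <- numeric(k)
  x <- 1                                # smallest candidate element still available
  for (i in 1:k) {
    repeat {
      skip <- choose(n - x, k - i)      # combinations whose i-th element is exactly x
      if (m < skip) break
      m <- m - skip
      x <- x + 1
    }
    comb[i] <- x
    x <- x + 1
  }
  comb
}
# sanity check against combn() for a small case
chk <- sapply(0:(choose(6, 3) - 1), function(m) nth.combination(m, 6, 3))
all(chk == combn(6, 3))                 # should be TRUE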
I am having difficulties selecting the right way to visualize data. Let's say we have bookstores that sell books, and every book has at least one category. For a bookstore, if we count all the categories of books, we acquire a histogram that shows the number of books that fall into a specific category for that bookstore. I want to visualize the bookstore behavior; I want to see if they favor a category over other categories. I don't want to see if they are favoring sci-fi altogether, but I want to see if they are treating every category equally or not. I have ~1M bookstores. I have thought of 4 methods: Sample the data, show only 500 bookstores' histograms. Show them in 5 separate pages using a 10x10 grid. Example of a 4x4 grid: Same as #1, but this time sort the x axis values in descending order of count, so if there is a favored category it will be seen easily. Imagine putting the histograms in #2 together like a deck and showing them in 3D. Something like this: Instead of using a third axis, use color to represent counts, i.e. a heatmap (2D histogram): If bookstores generally prefer some categories to others, it will be displayed as a nice gradient from left to right. Do you have any other visualization ideas/tools to represent multiple histograms?
[ "https://stats.stackexchange.com/questions/1289", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/760/" ]
4
HuggingFaceH4/stack-exchange-preferences
As you have found out, there are no easy answers to your question! I presume that you are interested in finding strange or different bookstores? If this is the case then you could try things like PCA (see the Wikipedia cluster analysis page for more details). To give you an idea, consider this example. You have 26 bookshops (with names A, B, .., Z). All bookshops are similar, except: Shop Z sells only a few History books. Shops O-Y sell more Romance books than average. A principal components plot highlights these shops for further investigation. Here's some sample R code:

> d = data.frame(Romance = rpois(26, 50), Horror = rpois(26, 100),
+                Science = rpois(26, 75), History = rpois(26, 125))
> rownames(d) = LETTERS
#Alter a few shops
> d[15:25,][1] = rpois(11,150)
> d[26,][4] = rpois(1, 10)
#look at the data
> head(d, 2)
  Romance Horror Science History
A      36    107      62     139
B      47     93      64     118
> books.PC.cov = prcomp(d)
> books.scores.cov = predict(books.PC.cov)
# Plot of PC1 vs PC2
> plot(books.scores.cov[,1], books.scores.cov[,2], xlab="PC 1", ylab="PC 2", pch=NA)
> text(books.scores.cov[,1], books.scores.cov[,2], labels=LETTERS)

This gives the following plot: PCA plot http://img265.imageshack.us/img265/7263/tmplx.jpg Notice that: Shop Z is an outlying point. The other shops form two distinct groups. Other possibilities: You could also look at GGobi. I've never used it, but it looks interesting.
Decision trees seems to be a very understandable machine learning method. Once created it can be easily inspected by a human which is a great advantage in some applications. What are the practical weak sides of Decision Trees?
[ "https://stats.stackexchange.com/questions/1292", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/217/" ]
6
HuggingFaceH4/stack-exchange-preferences
Here are a couple I can think of: They can be extremely sensitive to small perturbations in the data: a slight change can result in a drastically different tree. They can easily overfit. This can be negated by validation methods and pruning, but this is a grey area. They can have problems with out-of-sample prediction (this is related to them being non-smooth). Some of these are related to the problem of multicollinearity: when two variables both explain the same thing, a decision tree will greedily choose the best one, whereas many other methods will use them both. Ensemble methods such as random forests can negate this to a certain extent, but you lose the ease of understanding. However the biggest problem, from my point of view at least, is the lack of a principled probabilistic framework. Many other methods have things like confidence intervals, posterior distributions etc., which give us some idea of how good a model is. A decision tree is ultimately an ad hoc heuristic, which can still be very useful (they are excellent for finding the sources of bugs in data processing), but there is the danger of people treating the output as "the" correct model (from my experience, this happens a lot in marketing).
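A quick illustration of the sensitivity point, assuming the rpart package is available: refitting on a mildly perturbed version of the same data can change the tree's structure.

library(rpart)
set.seed(3)
fit1 <- rpart(Species ~ ., data = iris)
boot <- iris[sample(nrow(iris), replace = TRUE), ]   # a mild (bootstrap) perturbation
fit2 <- rpart(Species ~ ., data = boot)
as.character(fit1$frame$var[1])   # variable used at the root of the original tree
as.character(fit2$frame$var[1])   # may well differ after the perturbation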
I have a detector which will detect an event with some probability p. If the detector says that an event occurred, then that is always the case, so there are no false positives. After I run it for some time, I get k events detected. I would like to calculate what the total number of events that occurred was, detected or otherwise, with some confidence, say 95%. So for example, let's say I get 13 events detected. I would like to be able to calculate that there were between 13 and 19 events with 95% confidence based on p. Here's what I've tried so far: The probability of detecting k events if there were n total is: binomial(n, k) * p^k * (1 - p)^(n - k) The sum of that over n from k to infinity is: 1/p Which means that the probability of there being n events total is: f(n) = binomial(n, k) * p^(k + 1) * (1 - p)^(n - k) So if I want to be 95% sure I should find the first partial sum f(k) + f(k+1) + f(k+2) ... + f(k+m) which is at least 0.95 and the answer is [k, k+m]. Is this the correct approach? Also, is there a closed-form formula for the answer?
[ "https://stats.stackexchange.com/questions/1296", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/762/" ]
5
HuggingFaceH4/stack-exchange-preferences
I would choose to use the negative binomial distribution, which returns the probability that there will be X failures before the k-th success, when the constant probability of a success is p. Using an example

k=17  # number of successes
p=.6  # constant probability of success

the mean and sd for the failures are given by

mean.X <- k*(1-p)/p
sd.X <- sqrt(k*(1-p)/p^2)

The distribution of the failures X will have approximately that shape

plot(dnbinom(0:(mean.X + 3 * sd.X),k,p),type='l')

So, the number of failures will be (with 95% confidence) approximately between

qnbinom(.025,k,p)
[1] 4

and

qnbinom(.975,k,p)
[1] 21

So your interval would be [k+qnbinom(.025,k,p), k+qnbinom(.975,k,p)] (using the example's numbers, [21,38]).
In the traditional Birthday Paradox the question is "what are the chances that two or more people in a group of $n$ people share a birthday". I'm stuck on a problem which is an extension of this. Instead of knowing the probability that two people share a birthday, I need to extend the question to know what is the probability that $x$ or more people share a birthday. With $x=2$ you can do this by calculating the probability that no two people share a birthday and subtract that from $1$, but I don't think I can extend this logic to larger numbers of $x$. To further complicate this I also need a solution which will work for very large numbers for $n$ (millions) and $x$ (thousands).
[ "https://stats.stackexchange.com/questions/1308", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/765/" ]
4
HuggingFaceH4/stack-exchange-preferences
This is a counting problem: there are $b^n$ possible assignments of $b$ birthdays to $n$ people. Of those, let $q(k; n, b)$ be the number of assignments for which no birthday is shared by more than $k$ people but at least one birthday actually is shared by $k$ people. The probability we seek can be found by summing the $q(k;n,b)$ for appropriate values of $k$ and multiplying the result by $b^{-n}$. These counts can be found exactly for values of $n$ less than several hundred. However, they will not follow any straightforward formula: we have to consider the patterns of ways in which birthdays can be assigned. I will illustrate this in lieu of providing a general demonstration. Let $n = 4$ (this is the smallest interesting situation). The possibilities are: Each person has a unique birthday; the code is {4}. Exactly two people share a birthday; the code is {2,1}. Two people have one birthday and the other two have another; the code is {0,2}. Three people share a birthday; the code is {1,0,1}. Four people share a birthday; the code is {0,0,0,1}. Generally, the code $\{a[1], a[2], \ldots\}$ is a tuple of counts whose $k^\text{th}$ element stipulates how many distinct birthdates are shared by exactly $k$ people. Thus, in particular, $$1 a[1] + 2a[2] + ... + k a[k] + \ldots = n.$$ Note, even in this simple case, that there are two ways in which the maximum of two people per birthday is attained: one with the code $\{0,2\}$ and another with the code $\{2,1\}$. We can directly count the number of possible birthday assignments corresponding to any given code. This number is the product of three terms. One is a multinomial coefficient; it counts the number of ways of partitioning $n$ people into $a[1]$ groups of $1$, $a[2]$ groups of $2$, and so on. Because the sequence of groups does not matter, we have to divide this multinomial coefficient by $a[1]!a[2]!\cdots$; its reciprocal is the second term. Finally, line up the groups and assign them each a birthday: there are $b$ candidates for the first group, $b-1$ for the second, and so on. These values have to be multiplied together, forming the third term. It is equal to the "factorial product" $b^{(a[1]+a[2]+\cdots)}$ where $b^{(m)}$ means $b(b-1)\cdots(b-m+1)$. There is an obvious and fairly simple recursion relating the count for a pattern $\{a[1], \ldots, a[k]\}$ to the count for the pattern $\{a[1], \ldots, a[k-1]\}$. This enables rapid calculation of the counts for modest values of $n$. Specifically, $a[k]$ represents $a[k]$ birthdates shared by exactly $k$ people each. After these $a[k]$ groups of $k$ people have been drawn from the $n$ people, which can be done in $x$ distinct ways (say), it remains to count the number of ways of achieving the pattern $\{a[1], \ldots, a[k-1]\}$ among the remaining people. Multiplying this by $x$ gives the recursion. I doubt there is a closed form formula for $q(k; n, b)$, which is obtained by summing the counts for all partitions of $n$ whose maximum term equals $k$. Let me offer some examples: With $b=5$ (five possible birthdays) and $n=4$ (four people), we obtain $$\eqalign{ q(1) &= q(1;4,5) &= 120 \\ q(2) &= 360 + 60 &= 420 \\ q(3) &&= 80 \\ q(4) &&= 5.\\ }$$ Whence, for example, the chance that three or more people out of four share the same "birthday" (out of $5$ possible dates) equals $(80 + 5)/625 = 0.136$. As another example, take $b = 365$ and $n = 23$. 
Here are the values of $q( k;23,365)$ for the smallest $k$ (to six sig figs only): $$\eqalign{ k=1: &0.49270 \\ k=2: &0.494592 \\ k=3: &0.0125308 \\ k=4: &0.000172844 \\ k=5: &1.80449E-6 \\ k=6: &1.48722E-8 \\ k=7: &9.92255E-11 \\ k=8: &5.45195E-13. }$$ Using this technique, we can readily compute that there is about a 50% chance of (at least) a three-way birthday collision among 87 people, a 50% chance of a four-way collision among 187, and a 50% chance of a five-way collision among 310 people. That last calculation starts taking a few seconds (in Mathematica, anyway) because the number of partitions to consider starts getting large. For substantially larger $n$ we need an approximation. One approximation is obtained by means of the Poisson distribution with expectation $n/b$, because we can view a birthday assignment as arising from $b$ almost (but not quite) independent Poisson variables each with expectation $n/b$: the variable for any given possible birthday describes how many of the $n$ people have that birthday. The distribution of the maximum is therefore approximately $F(k)^b$ where $F$ is the Poisson CDF. This is not a rigorous argument, so let's do a little testing. The approximation for $n = 23$, $b = 365$ gives $$\eqalign{ k=1: &0.498783 \\ k=2: &0.496803\\ k=3: &0.014187\\ k=4: &0.000225115. }$$ By comparing with the preceding you can see that the relative probabilities can be poor when they are small, but the absolute probabilities are reasonably well approximated to about 0.5%. Testing with a wide range of $n$ and $b$ suggests the approximation is usually about this good. To wrap up, let's consider the original question: take $n = 10,000$ (the number of observations) and $b = 1\,000\,000$ (the number of possible "structures," approximately). The approximate distribution for the maximum number of "shared birthdays" is $$\eqalign{ k=1: &0 \\ k=2: &0.8475+\\ k=3: &0.1520+\\ k=4: &0.0004+\\ k\gt 4: &\lt 1E-6. }$$ (This is a fast calculation.) Clearly, observing one structure 10 times out of 10,000 would be highly significant. Because $n$ and $b$ are both large, I expect the approximation to work quite well here. Incidentally, as Shane intimated, simulations can provide useful checks. A Mathematica simulation is created with a function like simulate[n_, b_] := Max[Last[Transpose[Tally[RandomInteger[{0, b - 1}, n]]]]]; which is then iterated and summarized, as in this example which runs 10,000 iterations of the $n = 10000$, $b = 1\,000\,000$ case: Tally[Table[simulate[10000, 1000000], {n, 1, 10000}]] // TableForm Its output is 2 8503 3 1493 4 4 These frequencies closely agree with those predicted by the Poisson approximation.
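For readers who prefer R to Mathematica, here is a small sketch of the Poisson approximation described above; the helper name is mine, and the values it is compared against are the ones quoted earlier in this answer.

# Approximate P(max shared "birthdays" <= k) as F(k)^b, with F the Poisson CDF of mean n/b
p_max_le <- function(k, n, b) ppois(k, lambda = n / b)^b

n <- 23; b <- 365
round(diff(c(0, sapply(1:4, p_max_le, n = n, b = b))), 6)
# approximately 0.4988, 0.4968, 0.0142, 0.000225 (close to the quoted approximation values)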
I've sampled a real-world process, network ping times. The "round-trip time" is measured in milliseconds. Results are plotted in a histogram (image omitted): latency has a minimum value, but a long upper tail. I want to know what statistical distribution this is, and how to estimate its parameters. Even though the distribution is not a normal distribution, I can still show what I am trying to achieve. The normal distribution uses the function $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ with the two parameters $\mu$ (mean) and $\sigma^2$ (variance). Parameter estimation: the formulas for estimating the two parameters are $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \hat{\mu})^2$. Applying these formulas to the data I have in Excel, I get $\hat{\mu} = 10.9558$ (mean) and $\hat{\sigma}^2 = 67.4578$ (variance). With these parameters I can plot the "normal" distribution over top of my sampled data (plot omitted). Obviously it's not a normal distribution: a normal distribution has an infinite top and bottom tail and is symmetrical, and this distribution is not symmetrical. What principles would I apply, and what flowchart would I follow, to determine what kind of distribution this is? Given that the distribution has no negative tail and a long positive tail: what distributions match that? Is there a reference that matches distributions to the observations you're taking? And cutting to the chase, what is the formula for this distribution, and what are the formulas to estimate its parameters? I want the distribution so I can get the "average" value, as well as the "spread". I am actually plotting the histogram in software, and I want to overlay the theoretical distribution. Note: Cross-posted from math.stackexchange.com Update: 160,000 samples: months and months, and countless sampling sessions, all give the same distribution. There must be a mathematical representation. Harvey suggested putting the data on a log scale. Here's the probability density on a log scale (plot omitted). It's not an answer, but an addendum to the question: here are the distribution buckets. 
The values are normalized Time Value 53.5 1.86885613545469E-5 54.5 0.00396197500716395 55.5 0.0299702228922418 56.5 0.0506460012708222 57.5 0.0625879919763777 58.5 0.069683415770654 59.5 0.0729476844872482 60.5 0.0508017392821101 61.5 0.032667605247748 62.5 0.025080049337802 63.5 0.0224138145845533 64.5 0.019703973188144 65.5 0.0183895443728742 66.5 0.0172059354870862 67.5 0.0162839664602619 68.5 0.0151688822994406 69.5 0.0142780608748739 70.5 0.0136924859524314 71.5 0.0132751080821798 72.5 0.0121849420031646 73.5 0.0119419907055555 74.5 0.0117114984488494 75.5 0.0105528076448675 76.5 0.0104219877153857 77.5 0.00964952717939773 78.5 0.00879608287754009 79.5 0.00836624596638551 80.5 0.00813575370967943 81.5 0.00760001495084908 82.5 0.00766853967581576 83.5 0.00722624372375815 84.5 0.00692099722163388 85.5 0.00679017729215205 86.5 0.00672788208763689 87.5 0.00667804592402477 88.5 0.00670919352628235 89.5 0.00683378393531266 90.5 0.00612361860383988 91.5 0.00630427469693383 92.5 0.00621706141061261 93.5 0.00596788059255199 94.5 0.00573115881539439 95.5 0.0052950923837883 96.5 0.00490886211579433 97.5 0.00505214108617919 98.5 0.0045413204091549 99.5 0.00467214033863673 100.5 0.00439181191831853 101.5 0.00439804143877004 102.5 0.00432951671380337 103.5 0.00419869678432154 104.5 0.00410525397754881 105.5 0.00440427095922156 106.5 0.00439804143877004 107.5 0.00408656541619426 108.5 0.0040616473343882 109.5 0.00389345028219728 110.5 0.00392459788445485 111.5 0.0038249255572306 112.5 0.00405541781393668 113.5 0.00393705692535789 114.5 0.00391213884355182 115.5 0.00401804069122759 116.5 0.0039432864458094 117.5 0.00365672850503968 118.5 0.00381869603677909 119.5 0.00365672850503968 120.5 0.00340131816652754 121.5 0.00328918679840026 122.5 0.00317082590982146 123.5 0.00344492480968815 124.5 0.00315213734846692 125.5 0.00324558015523965 126.5 0.00277213660092446 127.5 0.00298394029627599 128.5 0.00315213734846692 129.5 0.0030649240621457 130.5 0.00299639933717902 131.5 0.00308984214395176 132.5 0.00300885837808206 133.5 0.00301508789853357 134.5 0.00287803844860023 135.5 0.00277836612137598 136.5 0.00287803844860023 137.5 0.00265377571234566 138.5 0.00267246427370021 139.5 0.0027472185191184 140.5 0.0029465631735669 141.5 0.00247311961925171 142.5 0.00259148050783051 143.5 0.00258525098737899 144.5 0.00259148050783051 145.5 0.0023485292102214 146.5 0.00253541482376687 147.5 0.00226131592390018 148.5 0.00239213585338201 149.5 0.00250426722150929 150.5 0.0026288576305396 151.5 0.00248557866015474 152.5 0.00267869379415173 153.5 0.00247311961925171 154.5 0.00232984064886685 155.5 0.00243574249654262 156.5 0.00242328345563958 157.5 0.00231738160796382 158.5 0.00256656242602444 159.5 0.00221770928073957 160.5 0.00241705393518807 161.5 0.00228000448525473 162.5 0.00236098825112443 163.5 0.00216787311712744 164.5 0.00197475798313046 165.5 0.00203705318764562 166.5 0.00209311887170926 167.5 0.00193115133996985 168.5 0.00177541332868196 169.5 0.00165705244010316 170.5 0.00160098675603952 171.5 0.00154492107197588 172.5 0.0011150841608213 173.5 0.00115869080398191 174.5 0.00107770703811221 175.5 0.000946887108630378 176.5 0.000853444301857643 177.5 0.000822296699600065 178.5 0.00072885389282733 179.5 0.000753771974633393 180.5 0.000766231015536424 181.5 0.000566886361087923 Bonus Reading What Is the Expected Distribution of Website Response Times? What Do You Mean? - Revisiting Statistics for Web Response Time Measurements Modeling Network Latency
[ "https://stats.stackexchange.com/questions/1315", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/775/" ]
4
HuggingFaceH4/stack-exchange-preferences
Let me ask a more basic question: what do you want to do with this distributional information? The reason I ask is because it may well make more sense to approximate the distribution with some sort of kernel density estimator, rather than insist that it fit into one of the (possibly shifted) exponential family distributions. You can answer almost all of the same sorts of questions that a standard distribution will let you answer, and you don't have to worry (as much) about whether you've selected the correct model. But if there's a fixed minimum time, and you must have some sort of compactly parameterized distribution to go with it, then just eyeballing it I'd subtract off the minimum and fit a gamma, like others have suggested.
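Both suggestions are cheap to try in R. The sketch below uses simulated stand-in data (the real ping times would take the place of x) and shifts by a value just below the observed minimum, so the gamma fit sees strictly positive values; the shift of 0.5 ms is my own arbitrary choice.

library(MASS)

set.seed(1)
x <- 53 + rgamma(1e4, shape = 2, rate = 0.15)   # stand-in for the observed round-trip times (ms)

plot(density(x))                                 # kernel density estimate: no parametric family assumed

shift <- min(x) - 0.5                            # assumed fixed minimum, placed just below the data
fitdistr(x - shift, "gamma")                     # maximum likelihood fit of a shifted gamma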
What would be the best way to display changes in two scalar variables (x,y) over time (z), in one visualization? One idea that I had was to plot x and y both on the vertical axis, with z as the horizontal. Note: I'll be using R and likely ggplot2
[ "https://stats.stackexchange.com/questions/1321", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/776/" ]
4
HuggingFaceH4/stack-exchange-preferences
Another idea is to plot one series as x and the second as y -- the time dependency will be hidden, but this plot shows the correlation between the two pretty well. (Time can still be shown to some extent by connecting the points chronologically; if the series are quite short and continuous, it should be readable.)
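Since the question mentions ggplot2, here is a minimal sketch of that idea on made-up data; geom_path() connects the points in row (i.e. chronological) order.

library(ggplot2)

# hypothetical series x and y observed at times z
df <- data.frame(z = 1:50)
df$x <- cumsum(rnorm(50))
df$y <- 0.5 * df$x + cumsum(rnorm(50, sd = 0.5))

ggplot(df, aes(x = x, y = y)) +
  geom_path(colour = "grey50") +   # chronological connection between successive points
  geom_point(aes(colour = z))      # colour encodes time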
Well, we've got favourite statistics quotes. What about statistics jokes?
[ "https://stats.stackexchange.com/questions/1337", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/521/" ]
8
HuggingFaceH4/stack-exchange-preferences
A statistician's wife had twins. He was delighted. He rang the minister who was also delighted. "Bring them to church on Sunday and we'll baptize them," said the minister. "No," replied the statistician. "Baptize one. We'll keep the other as a control." STATS: The Magazine For Students of Statistics, Winter 1996, Number 15
I am working with a large data set (approximately 50K observations) and trying to run a maximum likelihood estimation on 5 unknowns in Stata. I encountered an error message of "numerical overflow". How can I overcome this? I am trying to run a stochastic frontier analysis using the built-in Stata command "frontier". The dependent variable is the log of output and the independent variables are the logs of intermediate inputs, capital, labour, and utilities.
[ "https://stats.stackexchange.com/questions/1350", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/189/" ]
4
HuggingFaceH4/stack-exchange-preferences
After a day of searching, I found out that the issue was due to starting values. Thought I should just post the answer for future reference. The frontier command in Stata obtains its starting values using method of moments. The initial values might have produced negative infinity for the log likelihood. To get around the problem I needed to specify the starting values myself, which were obtained from a linear regression.
In an average (median?) conversation about statistics you will often find yourself discussing this or that method of analyzing this or that type of data. In my experience, careful study design, with special thought given to the statistical analysis, is often neglected (working in biology/ecology, this seems to be a prevailing occurrence). Statisticians often find themselves in a gridlock with insufficient (or outright wrong) data already collected. To paraphrase Ronald Fisher, they are forced to do a post-mortem on the data, which often leads to weaker conclusions, if any at all. I would like to know which references you use to construct a successful study design, preferably covering a wide range of methods (e.g. t-test, GLM, GAM, ordination techniques...), that help you avoid the pitfalls mentioned above.
[ "https://stats.stackexchange.com/questions/1352", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/144/" ]
4
HuggingFaceH4/stack-exchange-preferences
I agree with the point that statistics consultants are often brought in later on a project when it's too late to remedy design flaws. It's also true that many statistics books give scant attention to study design issues. You say you want designs "preferably for a wide range of methods (e.g. t-test, GLM, GAM, ordination techniques...". I see designs as relatively independent of statistical method: e.g., experiments (between subjects and within subjects factors) versus observational studies; longitudinal versus cross-sectional; etc. There are also a lot of issues related to measurement, domain specific theoretical knowledge, and domain specific study design principles that need to be understood in order to design a good study. In terms of books, I'd be inclined to look at domain specific books. In psychology (where I'm from) this means books on psychometrics for measurement, a book on research methods, and a book on statistics, as well as a range of even more domain specific research method books. You might want to check out Research Methods Knowledge Base for a free online resource for the social sciences. Published journal articles are also a good guide to what is best practice in a particular domain.
I am looking for a robust version of Hotelling's $T^2$ test for the mean of a vector. As data, I have a $m\ \times\ n$ matrix, $X$, each row an i.i.d. sample of an $n$-dimensional RV, $x$. The null hypothesis I wish to test is $E[x] = \mu$, where $\mu$ is a fixed $n$-dimensional vector. The classical Hotelling test appears to be susceptible to non-normality in the distribution of $x$ (just as the 1-d analogue, the Student t-test is susceptible to skew and kurtosis). what is the state of the art robust version of this test? I am looking for something relatively fast and conceptually simple. There was a paper in COMPSTAT 2008 on the topic, but I do not have access to the proceedings. Any help?
[ "https://stats.stackexchange.com/questions/1376", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/795/" ]
4
HuggingFaceH4/stack-exchange-preferences
Sure: two answers.

a) If by robustness you mean robust to outliers, then run Hotelling's T-test using robust estimates of location and scatter; you will find all the explanations and R code here: http://www.statsravingmad.com/blog/statistics/a-robust-hotelling-test/

b) If by robustness you mean optimal under a large class of distributions, then you should go for a sign-based T2 (ask if this is what you want; from the tone of your question I think not).

PS: this is the paper you want: Roelant, E., Van Aelst, S., and Willems, G. (2008), “Fast Bootstrap for Robust Hotelling Tests,” COMPSTAT 2008: Proceedings in Computational Statistics (P. Brito, Ed.), Heidelberg: Physika-Verlag, to appear.
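In case the linked post disappears, here is a minimal sketch of idea (a), assuming the MCD estimator from MASS as the robust location/scatter estimate; the data and null vector are made up for illustration. Note that the usual F reference distribution is only approximate once robust estimates are plugged in, which is why the Roelant et al. paper calibrates the test by bootstrap.

library(MASS)

set.seed(1)
X   <- matrix(rnorm(50 * 3), ncol = 3)      # hypothetical m x n data matrix
mu0 <- c(0, 0, 0)                           # null value for the mean vector

rob <- cov.rob(X, method = "mcd")           # robust centre and scatter (MCD)
m   <- nrow(X)
d   <- rob$center - mu0
T2  <- m * as.numeric(t(d) %*% solve(rob$cov) %*% d)
T2                                          # compare against a bootstrap reference distribution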
I am trying to test the null $E[X] = 0$, against the local alternative $E[X] > 0$, for a random variable $X$, subject to mild to medium skew and kurtosis of the random variable. Following suggestions by Wilcox in 'Introduction to Robust Estimation and Hypothesis Testing', I have looked at tests based on the trimmed mean, the median, as well as the M-estimator of location (Wilcox' "one-step" procedure). These robust tests do outperform the standard t-test, in terms of power, when testing with a distribution that is non-skewed, but leptokurtotic. However, when testing with a distribution that is skewed, these one-sided tests are either far too liberal or far too conservative under the null hypothesis, depending on whether the distribution is left- or right-skewed, respectively. For example, with 1000 observations, the test based on the median will actually reject ~40% of the time, at the nominal 5% level. The reason for this is obvious: for skewed distributions, the median and the mean are rather different. However, in my application, I really need to test the mean, not the median, not the trimmed mean. Is there a more robust version of the t-test that actually tests for the mean, but is impervious to skew and kurtosis? Ideally the procedure would work well in the no-skew, high-kurtosis case as well. The 'one-step' test is almost good enough, with the 'bend' parameter set relatively high, but it is less powerful than the trimmed mean tests when there is no skew, and has some troubles maintaining the nominal level of rejects under skew. background: the reason I really care about the mean, and not the median, is that the test would be used in a financial application. For example, if you wanted to test whether a portfolio had positive expected log returns, the mean is actually appropriate because if you invest in the portfolio, you will experience all the returns (which is the mean times the number of samples), instead of $n$ duplicates of the median. That is, I really care about the sum of $n$ draws from the R.V. $X$.
[ "https://stats.stackexchange.com/questions/1386", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/795/" ]
4
HuggingFaceH4/stack-exchange-preferences
Why are you looking at non-parametric tests? Are the assumptions of the t-test violated, namely ordinal or non-normal data and unequal variances? Of course, if your sample is large enough you can justify the parametric t-test, with its greater power, despite the lack of normality in the sample. Likewise, if your concern is unequal variances, there are corrections to the parametric test that yield accurate p-values (the Welch correction). Otherwise, comparing your results to the t-test is not a good way to go about this, because the t-test results are biased when its assumptions are not met. The Mann-Whitney U is an appropriate non-parametric alternative, if that's what you really need. You only lose power if you use the non-parametric test when you could justifiably use the t-test (because the assumptions are met). And, just for some more background, go here: Student's t Test for Independent Samples.
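For reference, both of the options mentioned here are one-liners in R, shown on made-up two-sample data (the Welch correction is in fact t.test's default behaviour):

set.seed(1)
x <- rnorm(40, mean = 0.2); y <- rexp(40) - 1   # hypothetical samples

t.test(x, y, var.equal = FALSE)   # Welch-corrected t-test
wilcox.test(x, y)                 # Mann-Whitney U test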
I came across an error of numerical overflow when running a maximum likelihood estimation on a log-linear specification. What does numerical overflow mean?
[ "https://stats.stackexchange.com/questions/1389", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/189/" ]
4
HuggingFaceH4/stack-exchange-preferences
It means that the algorithm generated a value that is greater than the maximum allowed for that type of variable. This is because computers use a finite number of bits to represent numbers, so they cannot represent arbitrary numbers, only a limited subset of them. The actual maximum depends on the type of variable and the architecture of the system. Why that happens during an MLE I'm not sure; my best guess would be that you should change the starting parameters.
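A quick illustration of the limit for double-precision floating point, which is what most statistical software uses internally (shown here in R):

.Machine$double.xmax   # largest representable double, about 1.8e308
exp(700)               # still finite
exp(710)               # overflows to Inf (e.g. a large linear predictor being exponentiated)
log(exp(710))          # Inf propagates, so a log-likelihood built this way breaks down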
Can anyone recommend me an open source graphic library to create forest and funnel plots? I was aiming at using it on a Java desktop application.
[ "https://stats.stackexchange.com/questions/1395", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/807/" ]
4
HuggingFaceH4/stack-exchange-preferences
Well, I use graphviz, which has Java bindings (Grappa). Although the dot language (graphviz's syntax) is simple, I prefer to use graphviz as a library through the excellent and production-stable Python bindings, pygraphviz and networkx. Here's the code for a simple 'funnel diagram' using those tools; it's not the most elaborate diagram, but it is complete--it initializes the graph object, creates all of the necessary components, styles them, renders the graph, and writes it to file.

import networkx as NX    # part of the toolchain, though not strictly needed in this minimal example
import pygraphviz as PV

G = PV.AGraph(strict=False, directed=True)    # initialize graph object

# create graph components:
node_list = ["Step1", "Step2", "Step3", "Step4"]
edge_list = [("Step1", "Step2"), ("Step2", "Step3"), ("Step3", "Step4")]
G.add_nodes_from(node_list)
G.add_edges_from(edge_list)

# style them:
nak = "fontname fontsize fontcolor shape style fill color size".split()
nav = "Arial 11 white invtrapezium filled cornflowerblue cornflowerblue 1.4".split()
nas = dict(zip(nak, nav))
for k, v in nas.items():
    G.node_attr[k] = v

eak = "fontname fontsize fontcolor dir arrowhead arrowsize arrowtail".split()
eav = "Arial 10 red4 forward normal 0.8 inv".split()
eas = dict(zip(eak, eav))
for k, v in eas.items():
    G.edge_attr[k] = v

n1 = G.get_node("Step1")
n1.attr['fontsize'] = '11'
n1.attr['fontcolor'] = 'red4'
n1.attr['label'] = '1411'
n1.attr['shape'] = 'rectangle'
n1.attr['width'] = '1.4'
n1.attr['height'] = '0.05'
n1.attr['color'] = 'firebrick4'

n4 = G.get_node("Step4")
n4.attr['shape'] = 'rectangle'

# it's simple to scale graph features to indicate 'flow' conditions, e.g., scale
# each container size based on how many items each holds in a given time snapshot
# (instead of setting the node attribute 'width' to a static quantity, you would
# just bind n1.attr['width'] to a variable such as total_from_container_1):
n1 = G.get_node("Step2")
n1.attr['width'] = '2.4'

# likewise, you can do the same with edge width (i.e., make the arrow thicker
# to indicate a higher 'flow rate'):
e1 = G.get_edge("Step1", "Step2")
e1.attr['label'] = ' 1411'
e1.attr['penwidth'] = 2.6

# and you can easily add labels to the nodes and edges to indicate, e.g., quantities:
e1 = G.get_edge("Step2", "Step3")
e1.attr['label'] = ' 392'

G.write("conv_fnl.dot")                 # save the dot file
G.draw("conv_fnl.png", prog="dot")      # lay out and save the rendered diagram

(The rendered diagram was originally shown at http://a.imageshack.us/img148/390/convfunnel.png)
I'm interested in obtaining a bootstrapped confidence interval on a quantity X, when this quantity is measured 10 times in each of 10 individuals. One approach is to obtain the mean per individual, then bootstrap the means (e.g. resample the means with replacement). Another approach is to do the following on each iteration of the bootstrapping procedure: within each individual, resample that individual's 10 observations with replacement, then compute a new mean for that individual, and finally compute a new group mean. In this approach, each individual observed in the original data set always contributes to the group mean on each iteration of the bootstrap procedure. Finally, a third approach is to combine the above two approaches: resample individuals, then resample within those individuals. This approach differs from the preceding one in that it permits the same individual to contribute multiple times to the group mean on each iteration, though because each contribution is generated via an independent resampling procedure, these contributions may be expected to vary slightly from each other. In practice, I find that these approaches yield different estimates of the confidence interval (e.g. with one data set, I find that the third approach yields much larger confidence intervals than the first two), so I'm curious what each might be interpreted to represent.
[ "https://stats.stackexchange.com/questions/1399", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/364/" ]
4
HuggingFaceH4/stack-exchange-preferences
Your first approach gives a between-subjects CI. If you wanted to measure within-subject variability, then that's the wrong approach. The second approach would generate a within-subject CI that only applies to those 10 individuals. The last approach is the correct one for the within-subject CI. Any increase in the width of the CI is because your CI is now representative of a CI that could be applied to the population, instead of to those 10 subjects.
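For concreteness, here is a small sketch of the three resampling schemes described in the question, on simulated data (10 subjects x 10 observations); the third scheme adds both sources of resampling variability, which is why its interval tends to be the widest.

set.seed(1)
dat <- matrix(rnorm(100, mean = 5), nrow = 10)   # rows = subjects, columns = repeated measures

boot_between <- function(d) mean(rowMeans(d)[sample(nrow(d), replace = TRUE)])
boot_within  <- function(d) mean(apply(d, 1, function(x) mean(sample(x, replace = TRUE))))
boot_both    <- function(d) {
  i <- sample(nrow(d), replace = TRUE)                               # resample subjects...
  mean(sapply(i, function(j) mean(sample(d[j, ], replace = TRUE))))  # ...then resample within them
}

ci <- function(f) quantile(replicate(5000, f(dat)), c(.025, .975))
rbind(means_only = ci(boot_between), within_only = ci(boot_within), both = ci(boot_both))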
In answering this question, John Christie suggested that the fit of logistic regression models should be assessed by evaluating the residuals. I'm familiar with how to interpret residuals in OLS: they are on the same scale as the DV, and are very clearly the difference between y and the y predicted by the model. However, for logistic regression, in the past I've typically just examined estimates of model fit, e.g. AIC, because I wasn't sure what a residual would mean for a logistic regression. After looking into R's help files a little bit, I see that in R there are five types of glm residuals available: c("deviance", "pearson", "working", "response", "partial"). The help file refers to: Davison, A. C. and Snell, E. J. (1991) Residuals and diagnostics. In: Statistical Theory and Modelling. In Honour of Sir David Cox, FRS, eds. Hinkley, D. V., Reid, N. and Snell, E. J., Chapman & Hall. I do not have a copy of that. Is there a short way to describe how to interpret each of these types? In a logistic context, will the sum of squared residuals provide a meaningful measure of model fit, or is one better off with an information criterion?
[ "https://stats.stackexchange.com/questions/1432", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ]
7
HuggingFaceH4/stack-exchange-preferences
The easiest residuals to understand are the deviance residuals, as when squared these sum to -2 times the log-likelihood. At its simplest, logistic regression can be understood as fitting the function $p = \text{logit}^{-1}(X\beta)$ for known $X$ in such a way as to minimise the total deviance, which is the sum of the squared deviance residuals of all the data points. The (squared) deviance of each data point is equal to (-2 times) the logarithm of the absolute difference between its predicted probability $\text{logit}^{-1}(X\beta)$ and the complement of its actual value (1 for a control; 0 for a case); in other words, it is -2 times the log of the probability the model assigns to the outcome that was actually observed. A perfect fit of a point (which never occurs) gives a deviance of zero, as log(1) is zero. A poorly fitting point has a large residual deviance, as -2 times the log of a very small value is a large number. Doing logistic regression is akin to finding a beta value such that the sum of squared deviance residuals is minimised. This can be illustrated with a plot, but I don't know how to upload one.
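The identity described above is easy to check in R on any binary-response glm; the dataset and predictors below are chosen purely as an illustration.

fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)

d <- residuals(fit, type = "deviance")
all.equal(sum(d^2), deviance(fit))                        # TRUE: squared deviance residuals sum to the deviance
all.equal(deviance(fit), -2 * as.numeric(logLik(fit)))    # TRUE for binary data (saturated log-likelihood is 0)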
If I have highly skewed positive data I often take logs. But what should I do with highly skewed non-negative data that include zeros? I have seen two transformations used: $\log(x+1)$ which has the neat feature that 0 maps to 0. $\log(x+c)$ where c is either estimated or set to be some very small positive value. Are there any other approaches? Are there any good reasons to prefer one approach over the others?
[ "https://stats.stackexchange.com/questions/1444", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ]
7
HuggingFaceH4/stack-exchange-preferences
It seems to me that the most appropriate choice of transformation is contingent on the model and the context. The '0' point can arise for several different reasons, each of which may have to be treated differently:

- Truncation (as in Robin's example): use appropriate models (e.g., mixtures, survival models, etc.)
- Missing data: impute data / drop observations if appropriate.
- Natural zero point (e.g., income levels; an unemployed person has zero income): transform as needed.
- Sensitivity of the measuring instrument: perhaps add a small amount to the data?

I am not really offering an answer, as I suspect there is no universal, 'correct' transformation when you have zeros.
I want to fully grasp the notion of $r^2$ describing the amount of variation between variables. Every web explanation is a bit mechanical and obtuse. I want to "get" the concept, not just mechanically use the numbers. E.g.: Hours studied vs. test score $r$ = .8 $r^2$ = .64 So, what does this mean? 64% of the variability of test scores can be explained by hours? How do we know that just by squaring?
[ "https://stats.stackexchange.com/questions/1447", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6967/" ]
6
HuggingFaceH4/stack-exchange-preferences
Start with the basic idea of variation. Your baseline model is the sum of the squared deviations of each observation from the mean. The R^2 value is the proportion of that variation that is accounted for by using an alternative model. For example, R-squared tells you how much of the variation in Y you can get rid of by summing up the squared distances from a regression line rather than from the mean.

I think this is made perfectly clear if we think about the simple regression problem plotted out. Consider a typical scatterplot where you have a predictor X along the horizontal axis and a response Y along the vertical axis. The mean is a horizontal line on the plot where Y is constant. The total variation in Y is the sum of squared differences between the mean of Y and each individual data point. It's the distance between the mean line and every individual point, squared and added up.

You can also calculate another measure of variability after you have the regression line from the model. This is the difference between each Y point and the regression line. Rather than each (Y - the mean) squared, we get (Y - the point on the regression line) squared. If the regression line is anything but horizontal, we're going to get less total squared distance when we use this fitted regression line rather than the mean--that is, there is less unexplained variation. The ratio between the variation explained (the reduction in squared error) and the original variation is your R^2. It's the proportion of the original variation in your response that is explained by fitting that regression line.

Here is some R code for a graph with the mean, the regression line, and segments from the regression line to each point, to help visualize:

library(ggplot2)
data(faithful)

plotdata <- aggregate(eruptions ~ waiting, data = faithful, FUN = mean)

linefit1 <- lm(eruptions ~ waiting, data = plotdata)
plotdata$expected <- predict(linefit1)
plotdata$sign <- residuals(linefit1) > 0

p <- ggplot(plotdata, aes(y = eruptions, x = waiting, xend = waiting, yend = expected))
p + geom_point(shape = 1, size = 3) +
  geom_smooth(method = lm, se = FALSE) +
  geom_segment(aes(y = eruptions, x = waiting, xend = waiting, yend = expected, colour = sign),
               data = plotdata) +
  theme(legend.position = "none") +
  geom_hline(yintercept = mean(plotdata$eruptions), size = 1)
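And a numeric check of the same ratio, reusing the objects from the code above:

tss <- sum((plotdata$eruptions - mean(plotdata$eruptions))^2)  # squared distances from the mean
rss <- sum(residuals(linefit1)^2)                              # squared distances from the fitted line
1 - rss / tss                                                  # equals summary(linefit1)$r.squared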
I have a bunch of articles presenting ORs (odds ratios) with 95% CIs (confidence intervals). I want to estimate from the articles the p-value for the observed OR. For that, I need an assumption regarding the distribution of the OR. What distribution can I safely assume/use?
[ "https://stats.stackexchange.com/questions/1455", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ]
5
HuggingFaceH4/stack-exchange-preferences
The log odds ratio has an asymptotically Normal distribution: $\log(\hat{OR}) \sim N(\log(OR), \sigma_{\log(OR)}^2)$, with $\sigma_{\log(OR)}$ estimated from the contingency table. See, for example, page 6 of the notes: Asymptotic Theory for Parametric Models
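Given that result, a reported OR and its 95% CI are enough to back out an approximate two-sided p-value; here is a sketch with made-up numbers (the reported OR and CI limits are placeholders):

or <- 1.8; lo <- 1.2; hi <- 2.7                  # hypothetical reported OR and 95% CI

se <- (log(hi) - log(lo)) / (2 * qnorm(0.975))   # SE of log(OR) recovered from the CI width
z  <- log(or) / se                               # Wald statistic against OR = 1
2 * pnorm(-abs(z))                               # approximate two-sided p-value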
I find it hard to understand what the issue with multiple comparisons really is. With a simple analogy, it is said that a person who will make many decisions will make many mistakes. So a very conservative precaution is applied, like the Bonferroni correction, to make the probability that this person will make any mistake at all as low as possible. But why do we care about whether the person has made any mistake at all among all the decisions he/she has made, rather than about the percentage of wrong decisions? Let me try to explain what confuses me with another analogy. Suppose there are two judges, one 60 years old and the other 20 years old. Then the Bonferroni correction tells the 20-year-old to be as conservative as possible in deciding on an execution, because he will work for many more years as a judge and will make many more decisions, so he has to be careful. But the 60-year-old will possibly retire soon and will make fewer decisions, so he can be more careless compared to the other. But actually, both judges should be equally careful or conservative, regardless of the total number of decisions they will make. I think this analogy more or less translates to the real problems where the Bonferroni correction is applied, and I find it counterintuitive.
[ "https://stats.stackexchange.com/questions/1458", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/148/" ]
5
HuggingFaceH4/stack-exchange-preferences
You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication is why some people do not believe in Bonferroni style corrections at all. Sometimes the kind of data one deals with in their career is such that this is not an issue. For judges who make one, or very few decisions on each new piece of evidence this is a very valid argument. But what about the judge with 20 defendants and who is basing their judgment on a single large set of data (e.g. war tribunals)? You're ignoring the kicks at the can part of the argument. Generally scientists are looking for something — a p-value less than alpha. Every attempt to find one is another kick at the can. One will eventually find one if one takes enough shots at it. Therefore, they should be penalized for doing that. The way you harmonize these two arguments is to realize they are both true. The simplest solution is to consider testing of differences within a single dataset as a kicks at the can kind of problem but that expanding the scope of correction outside that would be a slippery slope. This is a genuinely difficult problem in a number of fields, notably FMRI where there are thousands of data points being compared and there are bound to be some come up as significant by chance. Given that the field has been historically very exploratory one has to do something to correct for the fact that hundreds of areas of the brain will look significant purely by chance. Therefore, many methods of adjustment of criterion have been developed in that field. On the other hand, in some fields one might at most be looking at 3 to 5 levels of a variable and always just test every combination if a significant ANOVA occurs. This is known to have some problems (type 1 errors) but it's not particularly terrible. It depends on your point of view. The FMRI researcher recognizes a real need for a criterion shift. The person looking at a small ANOVA may feel that there's clearly something there from the test. The proper conservative point of view on the multiple comparisons is to always do something about them but only based on a single dataset. Any new data resets the criterion... unless you're a Bayesian...
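To put the "kicks at the can" point in numbers, here is the family-wise error rate for m independent tests at alpha = .05, together with the Bonferroni-adjusted per-test threshold:

m <- c(1, 5, 20, 100)
rbind(prob_at_least_one_false_positive = 1 - (1 - 0.05)^m,   # grows quickly with m
      bonferroni_per_test_alpha        = 0.05 / m)           # what the Bonferroni correction does about it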
Let's say I want to make a football simulator based on real-life data. Say I have a player who averages 5.3 yards per carry with an SD of 1.7 yards. I'd like to generate a random variable that simulates the next few plays, e.g. 5.7, 4.9, 5.3, etc. What stats terms do I need to look up to pursue this idea? Density function? The normal curve estimates what boundaries the data generally fall within, but how do I translate that into simulating subsequent data points? Thanks for any guidance!
[ "https://stats.stackexchange.com/questions/1462", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6967/" ]
4
HuggingFaceH4/stack-exchange-preferences
Of course you can use rnorm() in R, but it may be easier to understand how drawing from a pdf works by using the probability integral transform. Basically, once we specify the structure of the pdf, we can transform it into a cdf (numerically, so we can ignore what the equation is), and because the cdf takes unique values from 0 to 1, we can back-calculate a draw from the original pdf by matching random draws from 0 to 1 against the cdf. This way, you only need an RNG on (0, 1) and the pdf, and you're set. Here is the R code:

x <- seq(-4, 4, len = 1000)

# the normal pdf
f <- function(x, mu = 0, sigma = 1) {
  out <- 1 / sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))
  out
}

# numerical (discretised) cdf
x.ecdf <- cumsum(f(x)) / sum(f(x))

# match each uniform draw to the nearest cdf value and take the corresponding x
out <- vector()
y <- runif(100)
for (i in 1:length(y)) {
  out[i] <- which((y[i] - x.ecdf)^2 == min((y[i] - x.ecdf)^2))
}

par(mfrow = c(1, 2))
plot(x, x.ecdf)
hist(x[out], breaks = 20)

(The resulting plot was originally shown at http://probabilitynotes.files.wordpress.com/2010/08/rnormish.png)
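The same inversion idea is what the built-in quantile functions give you directly, so a one-line check of the approach (not part of the original answer) is:

z <- qnorm(runif(1e5))   # invert the exact normal cdf at uniform draws
c(mean(z), sd(z))        # close to 0 and 1, i.e. equivalent to rnorm(1e5)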
I need to analyze in R the data from a medical survey (with 100+ coded columns) that comes in a CSV. I will use rattle for some initial analysis, but behind the scenes it's still R. If I read.csv() the file, columns with numerical codes are treated as numerical data. I'm aware I could create categorical columns from them with factor(), but doing that for 100+ columns is a pain. I hope there is a better way to tell R to import the columns directly as factors, or at least to convert them in place afterwards. Thank you!
[ "https://stats.stackexchange.com/questions/1471", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/840/" ]
5
HuggingFaceH4/stack-exchange-preferences
You can use the colClasses argument to specify the classes of your data columns. For example:

data <- read.csv('foo.csv', colClasses=c('numeric', 'factor', 'factor'))

will assign numeric to the first column, factor to the second and third. Since you have so many columns, a shortcut might be:

data <- read.csv('foo.csv', colClasses=c('numeric', rep('factor', 37), 'character'))

or some such variation (i.e. assign numeric to first column, factor to next 37 columns, then character to the last one).
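For the "convert them in place afterwards" part of the question, one possible approach (the column indices here are just an example) is:

code_cols <- 2:101                                     # whichever columns hold the coded answers
data[code_cols] <- lapply(data[code_cols], factor)     # convert them to factors in place
str(data[, 1:5])                                       # quick check of the resulting column classes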
I want to cluster ~22000 points. Many clustering algorithms work better with higher quality initial guesses. What tools exist that can give me a good idea of the rough shape of the data? I do want to be able to choose my own distance metric, so a program I can feed a list of pairwise distances to would be just fine. I would like to be able to do something like highlight a region or cluster on the display and get a list of which data points are in that area. Free software preferred, but I do already have SAS and MATLAB.
[ "https://stats.stackexchange.com/questions/1475", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ]
5
HuggingFaceH4/stack-exchange-preferences
GGobi (http://www.ggobi.org/), along with the R package rggobi, is perfectly suited to this task. See the related presentation for examples: http://www.ggobi.org/book/2007-infovis/05-clustering.pdf