Dataset Viewer

| question (string, 18–38.8k chars) | source (sequence, length 3) | score (int64, 4–12) | dataset (string, 1 class) | answer (string, 0–28.8k chars) |
|---|---|---|---|---|
How should I elicit prior distributions from experts when fitting a Bayesian model?
|
[
"https://stats.stackexchange.com/questions/1",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
John Cook gives some interesting recommendations. Basically, get percentiles/quantiles (not means or obscure scale parameters!) from the experts, and fit them with the appropriate distribution.
http://www.johndcook.com/blog/2010/01/31/parameters-from-percentiles/
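A minimal R sketch of the idea (not John Cook's code), fitting a Gamma prior to two hypothetical elicited quantiles:
# Sketch: the expert says the 10th percentile is 2 and the 90th percentile is 10
# (hypothetical values). Find the Gamma(shape, rate) whose quantiles match.
elicited <- c(2, 10)
obj <- function(par) {
  q <- qgamma(c(0.1, 0.9), shape = exp(par[1]), rate = exp(par[2]))
  sum((q - elicited)^2)
}
fit <- optim(c(0, 0), obj)
shape <- exp(fit$par[1]); rate <- exp(fit$par[2])
qgamma(c(0.1, 0.9), shape, rate)  # should be close to 2 and 10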
|
In many different statistical methods there is an "assumption of normality". What is "normality" and how do I know if there is normality?
|
[
"https://stats.stackexchange.com/questions/2",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
The assumption of normality is just the supposition that the underlying random variable of interest is distributed normally, or approximately so. Intuitively, normality may be understood as the result of the sum of a large number of independent random events.
More specifically, normal distributions are defined by the following function:
$$ f(x) =\frac{1}{\sqrt{2\pi\sigma^2}}e^{ -\frac{(x-\mu)^2}{2\sigma^2} },$$
where $\mu$ and $\sigma^2$ are the mean and the variance, respectively. Its graph is the familiar bell-shaped curve.
Normality can be checked in multiple ways, some more or less suited to your problem depending on its features, such as the size of n. Basically, they all test for features expected if the distribution were normal (e.g. the expected quantile distribution).
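As a rough sketch of how such a check might look in R (simulated data; neither the plot nor the test is a universal prescription):
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)   # simulated data
qqnorm(x); qqline(x)                # graphical check: points should fall near the line
shapiro.test(x)                     # formal test: a small p-value suggests non-normality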
|
What are some valuable Statistical Analysis open source projects available right now?
Edit: as pointed out by Sharpie, valuable could mean helping you get things done faster or more cheaply.
|
[
"https://stats.stackexchange.com/questions/3",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/18/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
The R-project
http://www.r-project.org/
R is valuable and significant because it was the first widely-accepted Open-Source alternative to big-box packages. It's mature, well supported, and a standard within many scientific communities.
Some reasons why it is useful and valuable
There are some nice tutorials here.
|
I have two groups of data, each with a different distribution of multiple variables. I'm trying to determine whether these two groups' distributions differ in a statistically significant way. I have the data both in raw form and binned into easier-to-handle discrete categories with frequency counts in each.
What tests/procedures/methods should I use to determine whether or not these two groups are significantly different and how do I do that in SAS or R (or Orange)?
|
[
"https://stats.stackexchange.com/questions/4",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/23/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
I believe that this calls for a two-sample Kolmogorov–Smirnov test, or the like. The two-sample Kolmogorov–Smirnov test is based on comparing differences between the empirical distribution functions (ECDFs) of the two samples, meaning it is sensitive to both the location and the shape of the two distributions. It also generalizes to a multivariate form.
This test is found in various forms in different packages in R, so if you are basically proficient, all you have to do is install one of them (e.g. fBasics), and run it on your sample data.
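A minimal sketch with simulated groups (ks.test() is in base R, so no extra package is needed here):
set.seed(1)
group1 <- rnorm(200, mean = 0, sd = 1)
group2 <- rnorm(200, mean = 0.3, sd = 1.2)
ks.test(group1, group2)   # a small p-value suggests the two distributions differ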
|
Last year, I read a blog post from Brendan O'Connor entitled "Statistics vs. Machine Learning, fight!" that discussed some of the differences between the two fields. Andrew Gelman responded favorably to this:
Simon Blomberg: From R's fortunes package: "To paraphrase provocatively, 'machine learning is statistics minus any checking of models and assumptions'." -- Brian D. Ripley (about the difference between machine learning and statistics), useR! 2004, Vienna (May 2004) :-) Season's Greetings!
Andrew Gelman: In that case, maybe we should get rid of checking of models and assumptions more often. Then maybe we'd be able to solve some of the problems that the machine learning people can solve but we can't!
There was also the "Statistical Modeling: The Two Cultures" paper by Leo Breiman in 2001 which argued that statisticians rely too heavily on data modeling, and that machine learning techniques are making progress by instead relying on the predictive accuracy of models.
Has the statistics field changed over the last decade in response to these critiques? Do the two cultures still exist or has statistics grown to embrace machine learning techniques such as neural networks and support vector machines?
|
[
"https://stats.stackexchange.com/questions/6",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
I think the answer to your first question is simply in the affirmative. Take any issue of Statistical Science, JASA, Annals of Statistics of the past 10 years and you'll find papers on boosting, SVM, and neural networks, although this area is less active now. Statisticians have appropriated the work of Valiant and Vapnik, but on the other side, computer scientists have absorbed the work of Donoho and Talagrand. I don't think there is much difference in scope and methods any more. I have never bought Breiman's argument that CS people were only interested in minimizing loss using whatever works. That view was heavily influenced by his participation in Neural Networks conferences and his consulting work; but PAC, SVMs, Boosting have all solid foundations. And today, unlike 2001, Statistics is more concerned with finite-sample properties, algorithms and massive datasets.
But I think that there are still three important differences that are not going away soon.
Methodological Statistics papers are still overwhelmingly formal and deductive, whereas Machine Learning researchers are more tolerant of new approaches even if they don't come with a proof attached;
The ML community primarily shares new results and publications in conferences and related proceedings, whereas statisticians use journal papers. This slows down progress in Statistics and identification of star researchers. John Langford has a nice post on the subject from a while back;
Statistics still covers areas that are (for now) of little concern to ML, such as survey design, sampling, industrial Statistics etc.
|
I've been working on a new method for analyzing and parsing datasets to identify and isolate subgroups of a population without foreknowledge of any subgroup's characteristics. While the method works well enough with artificial data samples (i.e. datasets created specifically for the purpose of identifying and segregating subsets of the population), I'd like to try testing it with live data.
What I'm looking for is a freely available (i.e. non-confidential, non-proprietary) data source. Preferably one containing bimodal or multimodal distributions or being obviously comprised of multiple subsets that cannot be easily pulled apart via traditional means. Where would I go to find such information?
|
[
"https://stats.stackexchange.com/questions/7",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/38/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
Also see the UCI Machine Learning Repository.
http://archive.ics.uci.edu/ml/
|
Many studies in the social sciences use Likert scales. When is it appropriate to use Likert data as ordinal and when is it appropriate to use it as interval data?
|
[
"https://stats.stackexchange.com/questions/10",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
Maybe too late but I add my answer anyway...
It depends on what you intend to do with your data: if you are interested in showing that scores differ when considering different groups of participants (gender, country, etc.), you may treat your scores as numeric values, provided they fulfil the usual assumptions about variance (or shape) and sample size. If you are rather interested in highlighting how response patterns vary across subgroups, then you should consider item scores as discrete choices among a set of answer options and look to log-linear modeling, ordinal logistic regression, item-response models, or any other statistical model that allows you to cope with polytomous items.
As a rule of thumb, one generally considers that having 11 distinct points on a scale is sufficient to approximate an interval scale (for interpretation purposes, see @xmjx's comment). Likert items may be regarded as a true ordinal scale, but they are often used as numeric variables and we can compute their mean or SD. This is often done in attitude surveys, although it is wise to report both the mean/SD and the % of responses in, e.g., the two highest categories.
When using summated scale scores (i.e., we add up the scores on each item to compute a "total score"), the usual statistics may be applied, but you have to keep in mind that you are now working with a latent variable, so the underlying construct should make sense! In psychometrics, we generally check that (1) unidimensionality of the scale holds and (2) scale reliability is sufficient. When comparing two such scale scores (for two different instruments), we might even consider using correlation measures corrected for attenuation instead of the classical Pearson correlation coefficient.
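A minimal sketch of those two checks, assuming the psych package and a data frame of hypothetical Likert items (coded 1-5) driven by a single latent trait:
library(psych)
set.seed(1)
latent <- rnorm(200)
items  <- as.data.frame(sapply(1:5, function(i)
  cut(latent + rnorm(200, sd = 0.7), breaks = 5, labels = FALSE)))
fa.parallel(items)   # rough check that one dimension suffices
alpha(items)         # Cronbach's alpha as a reliability estimate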
Classical textbooks include:
1. Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill Series in Psychology.
2. Streiner, D.L. and Norman, G.R. (2008). Health Measurement Scales. A practical guide to their development and use (4th ed.). Oxford.
3. Rao, C.R. and Sinharay, S., Eds. (2007). Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V.
4. Dunn, G. (2000). Statistics in Psychiatry. Hodder Arnold.
You may also have a look at Applications of latent trait and latent class models in the social sciences, from Rost & Langeheine, and W. Revelle's website on personality research.
When validating a psychometric scale, it is important to look at so-called ceiling/floor effects (large asymmetry resulting from participants scoring at the lowest/highest response category), which may seriously affect any statistics computed when treating the scores as numeric variables (e.g., country aggregation, t-test). This raises specific issues in cross-cultural studies, since it is known that the overall response distribution in attitude or health surveys differs from one country to another (e.g. Chinese respondents vs. those from Western countries tend to show specific response patterns, the former generally having more extreme scores at the item level; see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models, Lee, S.-Y. (Ed.), pp 279-302, North-Holland).
More generally, if you are interested in measurement issues you should look at the psychometrics literature, which makes extensive use of Likert items. Various statistical models have been developed and are currently gathered under the Item Response Theory framework.
|
How would you describe in plain English the characteristics that distinguish Bayesian from Frequentist reasoning?
|
[
"https://stats.stackexchange.com/questions/22",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/66/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
Here is how I would explain the basic difference to my grandma:
I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when I press the phone locator the phone starts beeping.
Problem: Which area of my home should I search?
Frequentist Reasoning
I can hear the phone beeping. I also have a mental model which helps me identify the area from which the sound is coming. Therefore, upon hearing the beep, I infer the area of my home I must search to locate the phone.
Bayesian Reasoning
I can hear the phone beeping. Now, apart from a mental model which helps me identify the area from which the sound is coming, I also know the locations where I have misplaced the phone in the past. So, I combine my inferences using the beeps and my prior information about the locations where I have misplaced the phone in the past to identify an area I must search to locate the phone.
|
How can I find the PDF (probability density function) of a distribution given the CDF (cumulative distribution function)?
|
[
"https://stats.stackexchange.com/questions/23",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
As user28 said in comments above, the pdf is the first derivative of the cdf for a continuous random variable, and the difference for a discrete random variable.
Wherever the cdf has a jump discontinuity, the distribution has an atom at that point. Dirac delta "functions" can be used to represent these atoms in a generalized pdf.
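A minimal R sketch of the continuous case, recovering the standard normal pdf from its cdf by numerical differentiation:
x <- seq(-4, 4, by = 0.01)
cdf <- pnorm(x)                      # cdf of a standard normal
pdf_approx <- diff(cdf) / diff(x)    # finite-difference derivative
mid <- head(x, -1) + diff(x) / 2     # interval midpoints
max(abs(pdf_approx - dnorm(mid)))    # approximation error is tiny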
|
What modern tools (Windows-based) do you suggest for modeling financial time series?
|
[
"https://stats.stackexchange.com/questions/25",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
I recommend R (see the time series view on CRAN).
Some useful references:
Econometrics in R, by Grant Farnsworth
Multivariate time series modelling in R
|
What is a standard deviation, how is it calculated and what is its use in statistics?
|
[
"https://stats.stackexchange.com/questions/26",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/75/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
Standard deviation is a number that represents the "spread" or "dispersion" of a set of data. There are other measures for spread, such as range and variance.
Here are some example sets of data, and their standard deviations:
[1,1,1] standard deviation = 0 (there's no spread)
[-1,1,3] standard deviation = 1.6 (some spread)
[-99,1,101] standard deviation = 82 (big spread)
The above data sets have the same mean.
Deviation means "distance from the mean".
"Standard" here means "standardized", meaning the standard deviation and mean are in the same units, unlike variance.
For example, if the mean height is 2 meters, the standard deviation might be 0.3 meters, whereas the variance would be 0.09 meters squared.
It is convenient to know that at least 75% of the data points always lie within 2 standard deviations of the mean (or around 95% if the distribution is Normal).
For example, if the mean is 100, and the standard deviation is 15, then at least 75% of the values are between 70 and 130.
If the distribution happens to be Normal, then 95% of the values are between 70 and 130.
Generally speaking, IQ test scores are normally distributed and have an average of 100. Someone who is "very bright" is two standard deviations above the mean, meaning an IQ test score of 130.
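A minimal R sketch reproducing the spreads above (these use the population form with divisor n; note that base R's sd() divides by n - 1 instead):
pop_sd <- function(x) sqrt(mean((x - mean(x))^2))
pop_sd(c(1, 1, 1))       # 0
pop_sd(c(-1, 1, 3))      # about 1.6
pop_sd(c(-99, 1, 101))   # about 82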
|
Which methods are used for testing random variate generation algorithms?
|
[
"https://stats.stackexchange.com/questions/30",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The Diehard Test Suite is something close to a gold standard for testing random number generators. It includes a number of tests in which a good random number generator should produce results distributed according to some known distribution, against which the output of the tested generator can then be compared.
EDIT
I have to update this since I was not exactly right:
Diehard might still be used a lot, but it is no longer maintained and is no longer state-of-the-art. NIST has since come up with a set of improved tests.
|
After taking a statistics course and then trying to help fellow students, I noticed one subject that inspires much head-desk banging is interpreting the results of statistical hypothesis tests. It seems that students easily learn how to perform the calculations required by a given test but get hung up on interpreting the results. Many computerized tools report test results in terms of "p values" or "t values".
How would you explain the following points to college students taking their first course in statistics:
What does a "p-value" mean in relation to the hypothesis being tested? Are there cases when one should be looking for a high p-value or a low p-value?
What is the relationship between a p-value and a t-value?
|
[
"https://stats.stackexchange.com/questions/31",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
Understanding $p$-value
Suppose, that you want to test the hypothesis that the average height of male students at your University is $5$ ft $7$ inches. You collect heights of $100$ students selected at random and compute the sample mean (say it turns out to be $5$ ft $9$ inches). Using an appropriate formula/statistical routine you compute the $p$-value for your hypothesis and say it turns out to be $0.06$.
In order to interpret $p=0.06$ appropriately, we should keep several things in mind:
The first step under classical hypothesis testing is the assumption that the hypothesis under consideration is true. (In our context, we assume that the true average height is $5$ ft $7$ inches.)
Imagine doing the following calculation: Compute the probability that the sample mean is greater than $5$ ft $9$ inches assuming that our hypothesis is in fact correct (see point 1).
In other words, we want to know $$\mathrm{P}(\mathrm{Sample\: mean} \ge 5 \:\mathrm{ft} \:9 \:\mathrm{inches} \:|\: \mathrm{True\: value} = 5 \:\mathrm{ft}\: 7\: \mathrm{inches}).$$
The calculation in step 2 is what is called the $p$-value. Therefore, a $p$-value of $0.06$ would mean that if we were to repeat our experiment many, many times (each time we select $100$ students at random and compute the sample mean) then $6$ times out of $100$ we can expect to see a sample mean greater than or equal to $5$ ft $9$ inches.
Given the above understanding, should we still retain our assumption that our hypothesis is true (see step 1)? Well, a $p=0.06$ indicates that one of two things has happened:
(A) Either our hypothesis is correct and an extremely unlikely event has occurred (e.g., all $100$ students are student athletes)
or
(B) Our assumption is incorrect and the sample we have obtained is not that unusual.
The traditional way to choose between (A) and (B) is to choose an arbitrary cut-off for $p$. We choose (A) if $p > 0.05$ and (B) if $p < 0.05$.
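A minimal sketch of the step-2 calculation in R, with heights in inches (67 = 5 ft 7 in, 69 = 5 ft 9 in) and a hypothetical, deliberately large population sd of 13 inches chosen so the answer lands near the 0.06 used above:
set.seed(1)
mu0 <- 67; sd0 <- 13; n <- 100; xbar <- 69
# Analytic: P(sample mean >= 5 ft 9 in | true mean = 5 ft 7 in)
pnorm(xbar, mean = mu0, sd = sd0 / sqrt(n), lower.tail = FALSE)
# The same thing by "repeating the experiment many, many times" under the hypothesis:
sample_means <- replicate(1e5, mean(rnorm(n, mu0, sd0)))
mean(sample_means >= xbar)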
|
What R packages should I install for seasonality analysis?
|
[
"https://stats.stackexchange.com/questions/33",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
You don't need to install any packages because this is possible with base-R functions. Have a look at the arima function.
This is a basic function of Box-Jenkins analysis, so you should consider reading one of the R time series text-books for an overview; my favorite is Shumway and Stoffer. "Time Series Analysis and Its Applications: With R Examples".
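A minimal base-R sketch of what such an analysis might look like, using the built-in monthly AirPassengers series and a standard airline-type seasonal specification:
data(AirPassengers)
plot(decompose(AirPassengers))        # trend / seasonal / remainder split
fit <- arima(log(AirPassengers),
             order = c(0, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
fit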
|
I have a data set that I'd expect to follow a Poisson distribution, but it is overdispersed by about 3-fold. At the present, I'm modelling this overdispersion using something like the following code in R.
## assuming a median value of 1500
med = 1500
rawDist = rpois(1000000, med)
oDdist = rawDist + ((rawDist - med) * 3)
Visually, this seems to fit my empirical data very well. If I'm happy with the fit, is there any reason that I should be doing something more complex, like using a negative binomial distribution, as described here? (If so, any pointers or links on doing so would be much appreciated).
Oh, and I'm aware that this creates a slightly jagged distribution (due to the multiplication by three), but that shouldn't matter for my application.
Update: For the sake of anyone else who searches and finds this question, here's a simple R function to model an overdispersed poisson using a negative binomial distribution. Set d to the desired mean/variance ratio:
rpois.od <- function(n, lambda, d = 1) {
  if (d == 1)
    rpois(n, lambda)
  else
    rnbinom(n, size = (lambda / (d - 1)), mu = lambda)
}
(via the R mailing list: https://stat.ethz.ch/pipermail/r-help/2002-June/022425.html)
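A quick sanity check of this helper (assuming the rpois.od() definition above, with a hypothetical target dispersion of d = 3):
set.seed(1)
x <- rpois.od(1e6, lambda = 1500, d = 3)
var(x) / mean(x)   # should be roughly 3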
|
[
"https://stats.stackexchange.com/questions/35",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/54/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
For an overdispersed Poisson, use the negative binomial, which allows you to parameterize the variance as a function of the mean precisely. rnbinom(), etc., in R.
|
There is an old saying: "Correlation does not mean causation". When I teach, I tend to use the following standard examples to illustrate this point:
number of storks and birth rate in Denmark;
number of priests in America and alcoholism;
at the start of the 20th century it was noted that there was a strong correlation between 'Number of radios' and 'Number of people in Insane Asylums';
and my favorite: pirates cause global warming.
However, I do not have any references for these examples and whilst amusing, they are obviously false.
Does anyone have any other good examples?
|
[
"https://stats.stackexchange.com/questions/36",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
It might be useful to explain that "causes" is an asymmetric relation (X causes Y is different from Y causes X), whereas "is correlated with" is a symmetric relation.
For instance, homeless population and crime rate might be correlated, in that both tend to be high or low in the same locations. It is equally valid to say that homeless population is correlated with crime rate, or that crime rate is correlated with homeless population. To say that crime causes homelessness, or that homeless populations cause crime, are two different statements. And correlation does not imply that either is true. For instance, the underlying cause could be a third variable such as drug abuse or unemployment.
The mathematics of statistics is not good at identifying underlying causes, which requires some other form of judgement.
|
What algorithms are used in modern and good-quality random number generators?
|
[
"https://stats.stackexchange.com/questions/40",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
In R, the default settings for random number generation are:
For U(0,1), the Mersenne-Twister algorithm;
For Gaussian numbers, numerical inversion of the standard normal distribution function.
You can easily check this, viz.
> RNGkind()
[1] "Mersenne-Twister" "Inversion"
It is possible to change the default generator to other PRNGs, such as Super-Duper, Wichmann-Hill, Marsaglia-Multicarry, or even a user-supplied PRNG. See ?RNGkind for further details. I have never needed to change the default PRNG.
The C GSL library also uses the Mersenne-Twister by default.
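A minimal sketch of switching the generator and then restoring the default:
RNGkind()                     # e.g. "Mersenne-Twister" "Inversion"
RNGkind("Wichmann-Hill")      # switch the U(0,1) generator
set.seed(42); runif(3)
RNGkind("Mersenne-Twister")   # back to the default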
|
How would you explain data visualization and why it is important to a layman?
|
[
"https://stats.stackexchange.com/questions/44",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
When I teach very basic statistics to Secondary School Students I talk about evolution and how we have evolved to spot patterns in pictures rather than lists of numbers and that data visualisation is one of the techniques we use to take advantage of this fact.
Plus I try to talk about recent news stories where statistical insight contradicts what the press is implying, making use of sites like Gapminder to find the representation before choosing the story.
|
What do they mean when they say "random variable"?
|
[
"https://stats.stackexchange.com/questions/50",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
A random variable is a variable whose value depends on unknown events. We can summarize the unknown events as "state", and then the random variable is a function of the state.
Example:
Suppose we have three dice rolls ($D_{1}$,$D_{2}$,$D_{3}$). Then the state $S=(D_{1},D_{2},D_{3})$.
One random variable $X$ is the number of 5s. This is:
$$ X=(D_{1}=5?)+(D_{2}=5?)+(D_{3}=5?)$$
Another random variable $Y$ is the sum of the dice rolls. This is:
$$ Y=D_{1}+D_{2}+D_{3} $$
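A minimal R sketch of this example (one simulated state, and the two random variables computed from it):
set.seed(1)
state <- sample(1:6, 3, replace = TRUE)   # the state (D1, D2, D3)
X <- sum(state == 5)                      # number of 5s
Y <- sum(state)                           # sum of the dice rolls
c(X = X, Y = Y)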
|
What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results?
|
[
"https://stats.stackexchange.com/questions/53",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
You tend to use the covariance matrix when the variable scales are similar and the correlation matrix when variables are on different scales.
Using the correlation matrix is equivalent to standardizing each of the variables (to mean 0 and standard deviation 1). In general, PCA with and without standardizing will give different results, especially when the scales are different.
As an example, take a look at this R heptathlon data set. Some of the variables have an average value of about 1.8 (the high jump), whereas other variables (run 800m) are around 120.
library(HSAUR)
heptathlon[,-8] # look at heptathlon data (excluding 'score' variable)
This outputs:
hurdles highjump shot run200m longjump javelin run800m
Joyner-Kersee (USA) 12.69 1.86 15.80 22.56 7.27 45.66 128.51
John (GDR) 12.85 1.80 16.23 23.65 6.71 42.56 126.12
Behmer (GDR) 13.20 1.83 14.20 23.10 6.68 44.54 124.20
Sablovskaite (URS) 13.61 1.80 15.23 23.92 6.25 42.78 132.24
Choubenkova (URS) 13.51 1.74 14.76 23.93 6.32 47.46 127.90
...
Now let's do PCA on covariance and on correlation:
# scale=T bases the PCA on the correlation matrix
hep.PC.cor = prcomp(heptathlon[,-8], scale=TRUE)
hep.PC.cov = prcomp(heptathlon[,-8], scale=FALSE)
biplot(hep.PC.cov)
biplot(hep.PC.cor)
Notice that PCA on covariance is dominated by run800m and javelin: PC1 is almost equal to run800m (and explains $82\%$ of the variance) and PC2 is almost equal to javelin (together they explain $97\%$). PCA on correlation is much more informative and reveals some structure in the data and relationships between variables (but note that the explained variances drop to $64\%$ and $71\%$).
Notice also that the outlying individuals (in this data set) are outliers regardless of whether the covariance or correlation matrix is used.
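Continuing the code above, the explained-variance figures quoted can be read off with summary():
summary(hep.PC.cov)   # proportion of variance for the covariance-based PCA
summary(hep.PC.cor)   # and for the correlation-based PCA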
|
As I understand it, UK schools teach that the standard deviation is found using
$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2},$$
whereas US schools teach:
$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2}$$
(at a basic level anyway).
This has caused a number of my students problems in the past as they have searched on the Internet, but found the wrong explanation.
Why the difference?
With simple datasets say 10 values, what degree of error will there be if the wrong method is applied (eg in an exam)?
|
[
"https://stats.stackexchange.com/questions/54",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The first formula is the population standard deviation and the second formula is the sample standard deviation. The second formula is also related to the unbiased estimator of the variance - see wikipedia for further details.
I suppose that here in the UK they don't make the distinction between sample and population at high school. They certainly don't touch concepts such as biased estimators.
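To answer the "degree of error" part concretely, a minimal R sketch with 10 values:
set.seed(1)
x <- rnorm(10)
n <- length(x)
sd_pop    <- sqrt(sum((x - mean(x))^2) / n)   # divisor n
sd_sample <- sd(x)                            # base R uses divisor n - 1
c(population = sd_pop, sample = sd_sample,
  ratio = sd_sample / sd_pop)                 # ratio = sqrt(n/(n-1)), about 1.05 for n = 10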
|
What is the back-propagation algorithm and how does it work?
|
[
"https://stats.stackexchange.com/questions/58",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The back-propagation algorithm is a gradient descent algorithm for fitting a neural network model (as mentioned by @Dikran). Let me explain how.
Formally: plugging the gradient calculated at the end of this post into the gradient-descent iteration below, applied to the empirical loss [1], gives the back-propagation algorithm as a particular case of gradient descent.
A neural network model
Formally, we fix ideas with a simple single layer model:
$$ f(x)=g(A^1(s(A^2(x)))) $$
where $g:\mathbb{R} \rightarrow \mathbb{R}$ and $s:\mathbb{R}^M\rightarrow \mathbb{R}^M$ are known, with $s(x)[m]=\sigma(x[m])$ for all $m=1,\dots,M$, and $A^1:\mathbb{R}^M\rightarrow \mathbb{R}$, $A^2:\mathbb{R}^p\rightarrow \mathbb{R}^M$ are unknown affine functions. The function $\sigma:\mathbb{R}\rightarrow \mathbb{R}$ is called the activation function in the framework of classification.
A quadratic Loss function is taken to fix ideas.
Hence the input $(x_1,\dots,x_n)$ vectors of $\mathbb{R}^p$ can be fitted to the real output $(y_1,\dots,y_n)$ of $\mathbb{R}$ (could be vectors) by minimizing the empirical loss:
$$\mathcal{R}_n(A^1,A^2)=\sum_{i=1}^n (y_i-f(x_i))^2\;\;\;\;\;\;\; [1]$$
with respect to the choice of $A^1$ and $A^2$.
Gradient descent
A gradient descent for minimizing $\mathcal{R}$ is an algorithm that iterates:
$$\mathbf{a}_{l+1}=\mathbf{a}_l-\gamma_l \nabla \mathcal{R}(\mathbf{a}_l),\ l \ge 0.$$
for well chosen step sizes $(\gamma_l)_l$ (also called learning rate in the framework of back propagation). It requires the calculation of the gradient of $\mathcal{R}$. In the considered case $\mathbf{a}_l=(A^1_{l},A^2_{l})$.
Gradient of $\mathcal{R}$ (for the simple considered neural net model)
Let us denote by $\nabla_1 \mathcal{R}$ the gradient of $\mathcal{R}$ as a function of $A^1$, and by $\nabla_2\mathcal{R}$ the gradient of $\mathcal{R}$ as a function of $A^2$. A standard calculation (using the chain rule for the derivative of composite functions) and the notation $z_i=A^1(s(A^2(x_i)))$ give
$$\nabla_1 \mathcal{R}[1:M] =-2\times \sum_{i=1}^n z_i g'(z_i) (y_i-f(x_i))$$
for all $m=1,\dots,M$
$$\nabla_2 \mathcal{R}[1:p,m] =-2\times \sum_{i=1}^n x_i g'(z_i) z_i[m]\sigma'(A^2(x_i)[m]) (y_i-f(x_i))$$
Here I used the R notation: $x[a:b]$ is the vector composed of the coordinates of $x$ from index $a$ to index $b$.
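A minimal numerical sketch of the same idea in R (not the exact formulas above): gradient descent on the squared loss of a tiny one-hidden-layer network, with the gradient obtained by finite differences rather than the hand-derived back-propagation expressions, just to make the iteration concrete.
set.seed(1)
n <- 50; p <- 2; M <- 3
x <- matrix(rnorm(n * p), n, p)
y <- sin(x[, 1]) + 0.1 * rnorm(n)
sigma <- function(u) 1 / (1 + exp(-u))           # activation function
predict_net <- function(theta, x) {
  W2 <- matrix(theta[1:(M * p)], M, p)           # weights of A^2 (biases omitted for brevity)
  w1 <- theta[(M * p + 1):(M * p + M)]           # weights of A^1
  drop(sigma(x %*% t(W2)) %*% w1)
}
loss <- function(theta) sum((y - predict_net(theta, x))^2)   # empirical loss [1]
num_grad <- function(f, theta, eps = 1e-6)       # finite-difference gradient
  sapply(seq_along(theta), function(j) {
    e <- numeric(length(theta)); e[j] <- eps
    (f(theta + e) - f(theta - e)) / (2 * eps)
  })
theta0 <- rnorm(M * p + M, sd = 0.1)
theta  <- theta0
gamma  <- 0.002                                  # step size / learning rate
for (l in 1:1000) theta <- theta - gamma * num_grad(loss, theta)
c(start = loss(theta0), end = loss(theta))       # the loss should have decreased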
|
With the recent FIFA world cup, I decided to have some fun and determine which months produced world cup football players. Turned out, most footballers in the 2010 world cup were born in the first half of the year.
Someone pointed out, that children born in the first half of the year had a physical advantage over others and hence "survivorship bias" was involved in the equation. Is this an accurate observation? Can someone please explain why he says that?
Also, when trying to understand the concept, I found most examples revolved around the financial sector. Are they any other everyday life examples explaining it?
|
[
"https://stats.stackexchange.com/questions/62",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/58/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The basic idea behind this is that football clubs have an age cut-off when determining teams. In the league my children participate in, the age restriction states that children born after July 31st are placed on the younger team. This means that two children who are effectively the same age can be playing in two different age groups. The child born July 31st will be playing on the older team and will theoretically be the youngest and smallest on the team and in the league. The child born on August 1st will be the oldest and largest child in the league and will be able to benefit from that.
The survivorship bias comes because competitive leagues will select the best players for their teams. The best players in childhood are often the older players since they have additional time for their bodies to mature. This means that otherwise acceptable younger players are not selected simply because of their age. Since they are not given the same opportunities as the older kids, they don’t develop the same skills and eventually drop out of competitive soccer.
If the cut-off for competitive soccer in enough countries is January 1st, that would support the phenomenon you see. A similar phenomenon has been observed in several other sports, including baseball and ice hockey.
|
Duplicate thread: I just installed the latest version of R. What packages should I obtain?
What are the R packages you couldn't imagine your daily work with data?
Please list both general and specific tools.
UPDATE:
As of 24.10.10, ggplot2 seems to be the winner with 7 votes.
Other packages mentioned more than once are:
plyr - 4
RODBC, RMySQL - 4
sqldf - 3
lattice - 2
zoo - 2
Hmisc/rms - 2
Rcurl - 2
XML - 2
Thanks all for your answers!
|
[
"https://stats.stackexchange.com/questions/73",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/22/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
I use plyr and ggplot2 the most on a daily basis.
I also rely heavily on time series packages; most especially, the zoo package.
|
I'm using R and the manuals on the R site are really informative. However, I'd like to see some more examples and implementations with R which can help me develop my knowledge faster. Any suggestions?
|
[
"https://stats.stackexchange.com/questions/75",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
The Quick-R site is basic, but quite nice for a start: http://www.statmethods.net/index.html .
|
I have some ordinal data gained from survey questions. In my case they are Likert style responses (Strongly Disagree-Disagree-Neutral-Agree-Strongly Agree). In my data they are coded as 1-5.
I don't think means would mean much here, so what basic summary statistics are considered useful?
|
[
"https://stats.stackexchange.com/questions/97",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/114/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
A frequency table is a good place to start. You can do the count, and relative frequency for each level. Also, the total count, and number of missing values may be of use.
You can also use a contingency table to compare two variables at once. You can display it using a mosaic plot too.
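A minimal R sketch of these summaries on hypothetical Likert-style data:
set.seed(1)
item <- factor(sample(1:5, 200, replace = TRUE), levels = 1:5,
               labels = c("Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"))
group <- sample(c("A", "B"), 200, replace = TRUE)
table(item)                         # counts per level
round(prop.table(table(item)), 2)   # relative frequencies
tab <- table(group, item)           # contingency table against another variable
mosaicplot(tab, main = "Item by group")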
|
I'd like to see an answer with a qualitative view of the problem, not just a definition. Examples and analogies from other areas of applied math would also be good.
I understand my question is silly, but I can't find a good and intuitive introductory textbook on signal processing — if someone would suggest one, I will be happy.
|
[
"https://stats.stackexchange.com/questions/100",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/117/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
It depends on where you apply the window function. If you do it in the time domain, it's because you only want to analyze the periodic behavior of the function in a short duration. You do this when you don't believe that your data is from a stationary process.
If you do it in the frequency domain, then you do it to isolate a specific set of frequencies for further analysis; you do this when you believe that (for instance) high-frequency components are spurious.
The first three chapters of "A Wavelet Tour of Signal Processing" by Stephane Mallat have an excellent introduction to signal processing in general, and chapter 4 goes into a very good discussion of windowing and time-frequency representations in both continuous and discrete time, along with a few worked-out examples.
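A minimal base-R sketch of time-domain windowing: taper a noisy sinusoid with a hand-rolled Hann window before taking its spectrum.
n <- 256
t <- seq_len(n)
x <- sin(2 * pi * 0.05 * t) + rnorm(n, sd = 0.3)
hann <- 0.5 * (1 - cos(2 * pi * (t - 1) / (n - 1)))   # Hann window
spec_raw      <- Mod(fft(x))^2
spec_windowed <- Mod(fft(x * hann))^2
plot(spec_raw[1:(n / 2)], type = "l", log = "y", xlab = "frequency bin", ylab = "power")
lines(spec_windowed[1:(n / 2)], col = "red")          # reduced spectral leakage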
|
What is the best blog on data visualization?
I'm making this question a community wiki since it is highly subjective. Please limit each answer to one link.
Please note the following criteria for proposed answers:
[A]cceptable answers to questions like this ...need to supply adequate descriptions and reasoned justification. A mere hyperlink doesn't do it. ...[A]ny future replies [must] meet ...[these] standards; otherwise, they will be deleted without further comment.
|
[
"https://stats.stackexchange.com/questions/103",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
FlowingData | Data Visualization, Infographics, and Statistics
|
What statistical research blogs would you recommend, and why?
|
[
"https://stats.stackexchange.com/questions/114",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
http://www.r-bloggers.com/ is an aggregated blog from lots of blogs that talk about statistics using R, and the #rstats hashtag on twitter is also helpful. I write quite a bit about statistics and R in genetics research.
|
In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and then take the square root back at the end? Can't we just take the absolute value of the difference instead and get the expected value (mean) of those, and wouldn't that also show the variation of the data? The number will be different from the squared method (the absolute-value method will give a smaller number), but it should still show the spread of the data. Does anybody know why we take this square approach as a standard?
The definition of standard deviation:
$\sigma = \sqrt{E\left[\left(X - \mu\right)^2\right]}.$
Can't we just take the absolute value instead and still be a good measurement?
$\sigma = E\left[|X - \mu|\right]$
|
[
"https://stats.stackexchange.com/questions/118",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/83/"
] | 9
|
HuggingFaceH4/stack-exchange-preferences
|
If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e. in general how far each datum is from the mean), then we need a good method of defining how to measure that spread.
The benefits of squaring include:
Squaring always gives a non-negative value, so deviations above and below the mean cannot cancel each other out and the sum will always be zero or higher.
Squaring emphasizes larger differences, a feature that turns out to be both good and bad (think of the effect outliers have).
Squaring however does have a problem as a measure of spread and that is that the units are all squared, whereas we might prefer the spread to be in the same units as the original data (think of squared pounds, squared dollars, or squared apples). Hence the square root allows us to return to the original units.
I suppose you could say that absolute difference assigns equal weight to the spread of data whereas squaring emphasises the extremes. Technically though, as others have pointed out, squaring makes the algebra much easier to work with and offers properties that the absolute method does not (for example, the variance is equal to the expected value of the square of the distribution minus the square of the mean of the distribution)
It is important to note however that there's no reason you couldn't take the absolute difference if that is your preference on how you wish to view 'spread' (sort of how some people see 5% as some magical threshold for $p$-values, when in fact it is situation dependent). Indeed, there are in fact several competing methods for measuring spread.
My view is to use the squared values because I like to think of how it relates to the Pythagorean Theorem of Statistics: $c = \sqrt{a^2 + b^2}$ …this also helps me remember that when working with independent random variables, variances add, standard deviations don't. But that's just my personal subjective preference which I mostly only use as a memory aid, feel free to ignore this paragraph.
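A minimal R sketch of both measures, and of the "variances add, standard deviations don't" point:
set.seed(1)
x <- rnorm(1e5); y <- rnorm(1e5)
mean_abs_dev <- function(v) mean(abs(v - mean(v)))   # E|X - mu| (not stats::mad)
c(sd = sd(x), abs_dev = mean_abs_dev(x))             # about 1 vs about 0.8 for a normal
c(var(x) + var(y), var(x + y))                       # approximately equal
c(mean_abs_dev(x) + mean_abs_dev(y), mean_abs_dev(x + y))   # not equal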
An interesting analysis can be read here:
Revisiting a 90-year-old debate: the advantages of the mean deviation - Stephen Gorard (Department of Educational Studies, University of York); Paper presented at the British Educational Research Association Annual Conference, University of Manchester, 16-18 September 2004
|
I'm a programmer without statistical background, and I'm currently looking at different classification methods for a large number of different documents that I want to classify into pre-defined categories. I've been reading about kNN, SVM and NN. However, I have some trouble getting started. What resources do you recommend? I do know single variable and multi variable calculus quite well, so my math should be strong enough. I also own Bishop's book on Neural Networks, but it has proven to be a bit dense as an introduction.
|
[
"https://stats.stackexchange.com/questions/124",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/131/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
I recommend these books - they are highly rated on Amazon too:
"Text Mining" by Weiss
"Text Mining Application Programming", by Konchady
For software, I recommend RapidMiner (with the text plugin), free and open-source.
This is my "text mining process":
collect the documents (usually a web crawl)
[sample if too large]
timestamp
strip out markup
tokenize: break into characters, words, n-grams, or sliding windows
stemming (aka lemmatization)
[include synonyms]
see the Porter or Snowball algorithm
pronouns and articles are usually bad predictors
remove stopwords
feature vectorization
binary (appears or doesn’t)
word count
relative frequency: tf-idf
information gain, chi square
[have a minimum value for inclusion]
weighting
weight words at top of document higher?
Then you can start the work of classifying them. kNN, SVM, or Naive Bayes as appropriate.
You can see my series of text mining videos here
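A minimal R sketch of a few of the steps above (tokenization, stopword removal, stemming, tf-idf), assuming the tm and SnowballC packages and three hypothetical documents:
library(tm)
library(SnowballC)
docs <- c("The cat sat on the mat.",
          "Dogs and cats are common pets.",
          "Stock prices fell sharply today.")
corpus <- VCorpus(VectorSource(docs))
dtm <- DocumentTermMatrix(corpus, control = list(
  tolower = TRUE, removePunctuation = TRUE,
  stopwords = TRUE,            # remove stopwords
  stemming = TRUE,             # Porter/Snowball stemming
  weighting = weightTfIdf))    # tf-idf feature vectorization
inspect(dtm)   # the feature matrix you would feed to kNN, SVM or Naive Bayes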
|
Which is the best introductory textbook for Bayesian statistics?
One book per answer, please.
|
[
"https://stats.stackexchange.com/questions/125",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
John Kruschke released a book in mid 2011 called Doing Bayesian Data Analysis: A Tutorial with R and BUGS. (A second edition was released in Nov 2014: Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan.) It is truly introductory. If you want to walk from frequentist stats into Bayes though, especially with multilevel modelling, I recommend Gelman and Hill.
John Kruschke also has a website for the book that has all the examples in the book in BUGS and JAGS. His blog on Bayesian statistics also links in with the book.
|
In Plain English, how does one interpret a Bland-Altman plot?
What are the advantages of using a Bland-Altman plot over other methods of comparing two different measurement methods?
|
[
"https://stats.stackexchange.com/questions/128",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/132/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The Bland-Altman plot is more widely known as the Tukey Mean-Difference Plot (one of many charts devised by John Tukey http://en.wikipedia.org/wiki/John_Tukey).
The idea is that the x-axis is the mean of your two measurements, which is your best guess as to the "correct" result, and the y-axis is the difference between the two measurements. The chart can then highlight certain types of anomalies in the measurements. For example, if one method always gives too high a result, then you'll get all of your points above or all below the zero line. It can also reveal, for example, that one method over-estimates high values and under-estimates low values.
If you see the points on the Bland-Altman plot scattered all over the place, above and below zero, then that suggests that there is no consistent bias of one approach versus the other (of course, there could be hidden biases that this plot does not show).
Essentially, it is a good first step for exploring the data. Other techniques can be used to dig into more particular sorts of behaviour of the measurements.
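A minimal R sketch of such a plot for two simulated measurement methods (hypothetical data, with method 2 given a constant bias):
set.seed(1)
truth <- runif(50, 10, 100)
m1 <- truth + rnorm(50, sd = 2)       # method 1
m2 <- truth + 1 + rnorm(50, sd = 2)   # method 2, with a constant bias of +1
avg <- (m1 + m2) / 2
d   <- m1 - m2
plot(avg, d, xlab = "Mean of the two methods", ylab = "Difference (m1 - m2)")
abline(h = 0, lty = 2)
abline(h = mean(d) + c(-1.96, 1.96) * sd(d), col = "red")   # limits of agreement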
|
I had a plan of learning R in the near future. Reading another question I found out about Clojure. Now I don't know what to do.
I think a big advantage of R for me is that some people in Economics use it, including one of my supervisors (though the other said: stay away from R!). One advantage of Clojure is that it is Lisp-based, and as I have started learning Emacs and I am keen on writing my own customisations, it would be helpful (yeah, I know Clojure and Elisp are different dialects of Lisp, but they are both Lisp and thus similar I would imagine).
I can't ask which one is better, because I know this is very personal, but could someone give me the advantages (or disadvantages) of Clojure vs. R, especially in practical terms? For example, which one should be easier to learn, which one is more flexible or more powerful, which one has more libraries, more support, more users, etc.?
My intended use: The bulk of my estimation should be done using Matlab, so I am not looking for anything too deep in terms of statistical analysis, but rather a software to substitute Excel for the initial data manipulation and visualisation, summary statistics and charting, but also some basic statistical analysis or the initial attempts at my estimation.
|
[
"https://stats.stackexchange.com/questions/130",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/90/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
Let me start by saying that I love both languages: you can't go wrong with either, and they are certainly better than something like C++ or Java for doing data analysis.
For basic data analysis I would suggest R (especially with plyr). IMO, R is a little easier to learn than Clojure, although this isn't completely obvious since Clojure is based on Lisp and there are numerous fantastic Lisp resources available (such as SICP). There are fewer keywords in Clojure, but the libraries are much more difficult to install and work with. Also, keep in mind that R (or S) is largely derived from Scheme, so you would benefit from Lisp knowledge when using it.
In general:
The main advantage of R is the community on CRAN (over 2461 packages and counting). Nothing will compare with this in the near future, not even a commercial application like matlab.
Clojure has the big advantage of running on the JVM which means that it can use any Java based library immediately.
I would add that I gave a talk relating Clojure/Incanter to R a while ago, so you may find it of interest. In my experience around creating this, Clojure was generally slower than R for simple operations.
|
On smaller window sizes, n log n sorting might work. Are there any better algorithms to achieve this?
|
[
"https://stats.stackexchange.com/questions/134",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/138/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Edit:
As @Hunaphu points out (and @whuber below in his answer), the original answer I gave to the OP (below) is wrong. It is indeed quicker to first sort the initial batch and then keep updating the median up or down (depending on whether a new data point falls to the left or to the right of the current median).
It's bad form to sort an array to compute a median. Medians (and other quantiles) are typically computed using the quickselect algorithm, with $O(n)$ complexity.
You may also want to look at my answer to a recent related question here.
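As a small aside, base R already ships a running median for fixed window sizes; a minimal sketch:
set.seed(1)
x <- cumsum(rnorm(1000))
rm21 <- runmed(x, k = 21)     # running median with window size 21
plot(x, type = "l", col = "grey")
lines(rm21, col = "red", lwd = 2)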
|
I'm interested in learning R on the cheap. What's the best free resource/book/tutorial for learning R?
|
[
"https://stats.stackexchange.com/questions/138",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/142/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
Some useful R links (find out the link that suits you):
Intro:
for R basics http://cran.r-project.org/doc/contrib/usingR.pdf
for data manipulation http://had.co.nz/plyr/plyr-intro-090510.pdf
http://portal.stats.ox.ac.uk/userdata/ruth/APTS2012/APTS.html
Interactive intro to R programming language https://www.datacamp.com/courses/introduction-to-r
Application focused R tutorial https://www.teamleada.com/tutorials/introduction-to-statistical-programming-in-r
In-browser learning for R http://tryr.codeschool.com/
with a focus on economics:
lecture notes with R code http://www.econ.uiuc.edu/~econ472/e-Tutorial.html
A brief guide to R and Economics http://people.su.se/~ma/R_intro/R_intro.pdf
Graphics: plots, maps, etc.:
tutorial with info on plots http://cran.r-project.org/doc/contrib/Rossiter-RIntro-ITC.pdf
a graph gallery of R plots and charts with supporting code http://addictedtor.free.fr/graphiques/
A tutorial for Lattice http://osiris.sunderland.ac.uk/~cs0her/Statistics/UsingLatticeGraphicsInR.htm
Ggplot R graphics http://had.co.nz/ggplot2/
Ggplot Vs Lattice @ http://had.co.nz/ggplot/vs-lattice.html
Multiple tutorials for using ggplot2 and Lattice http://learnr.wordpress.com/tag/ggplot2/
Google Charts with R http://www.iq.harvard.edu/blog/sss/archives/2008/04/google_charts_f_1.shtml
Introduction to using RGoogleMaps @ http://cran.r-project.org/web/packages/RgoogleMaps/vignettes/RgoogleMaps-intro.pdf
Thematic Maps with R https://stackoverflow.com/questions/1260965/developing-geographic-thematic-maps-with-r
geographic maps in R http://smartdatacollective.com/Home/22052
GUIs:
Poor Man GUI for R http://wiener.math.csi.cuny.edu/pmg/
R Commander is a robust GUI for R http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/installation-notes.html
JGR is a Java-based GUI for R http://jgr.markushelbig.org/Screenshots.html
Time series & finance:
a good beginner’s tutorial for Time Series http://www.stat.pitt.edu/stoffer/tsa2/index.html
Interesting time series packages in R http://robjhyndman.com/software
advanced time series in R http://www.wise.xmu.edu.cn/2007summerworkshop/download/Advanced%20Topics%20in%20Time%20Series%20Econometrics%20Using%20R1_ZongwuCAI.pdf
provides a great analysis and visualization framework for quantitative trading http://www.quantmod.com/
Guide to Credit Scoring using R http://cran.r-project.org/doc/contrib/Sharma-CreditScoring.pdf
an Open Source framework for Financial Analysis http://www.rmetrics.org/
Data / text mining:
A Data Mining tool in R http://rattle.togaware.com/
An online e-book for Data Mining with R http://www.liaad.up.pt/~ltorgo/DataMiningWithR/
Introduction to the Text Mining package in R http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf
Other statistical techniques:
Quick-R http://www.statmethods.net/
annotated guides for a variety of models http://www.ats.ucla.edu/stat/r/dae/default.htm
Social Network Analysis http://www.r-project.org/conferences/useR-2008/slides/Bojanowski.pdf
Editors:
Komodo Edit R editor http://www.sciviews.org/SciViews-K/index.html
Tinn-R makes for a good R editor http://www.sciviews.org/Tinn-R/
An Eclipse plugin for R @ http://www.walware.de/goto/statet
Instructions to install StatET in Eclipse http://www.splusbook.com/Rintro/R_Eclipse_StatET.pdf
RStudio http://rstudio.org/
Emacs Speaks Statistics, a statistical language package for Emacs http://ess.r-project.org/
Interfacing w/ other languages / software:
to embed R data frames in Excel via multiple approaches http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/
provides a tool to make R usable from Excel http://www.statconn.com/
Connect to MySQL from R http://erikvold.com/blog/index.cfm/2008/8/20/how-to-connect-to-mysql-with-r-in-wndows-using-rmysql
info about pulling data from SAS, STATA, SPSS, etc. http://www.statmethods.net/input/importingdata.html
Latex http://www.stat.uni-muenchen.de/~leisch/Sweave/
R2HTML http://www.feferraz.net/en/P/R2HTML
Blogs, newsletters, etc.:
A very informative blog http://blog.revolutionanalytics.com/
A blog aggregator for posts about R http://www.r-bloggers.com/
R mailing lists http://www.r-project.org/mail.html
R newsletter (old) http://cran.r-project.org/doc/Rnews/
R journal (current) http://journal.r-project.org/
Other / uncategorized: (as of yet)
Web Scraping in R http://www.programmingr.com/content/webscraping-using-readlines-and-rcurl
a very interesting list of packages that is seriously worth a look http://www.omegahat.org/
Commercial versions of R @ http://www.revolutionanalytics.com/
Red R for R tasks http://code.google.com/p/r-orange/
KNIME for R (worth a serious look) http://www.knime.org/introduction/screenshots
R Tutorial for Titanic https://statsguys.wordpress.com/
|
Possible Duplicate:
Locating freely available data samples
Where can I find freely accessible data sources?
I'm thinking of sites like
http://www2.census.gov/census_2000/datasets/?
|
[
"https://stats.stackexchange.com/questions/145",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/138/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Amazon has free Public Data sets for use with EC2.
http://aws.amazon.com/publicdatasets/
Here's a list: http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=243
|
A while ago a user on the R-help mailing list asked about the soundness of using PCA scores in a regression. The user is trying to use some PC scores to explain variation in another PC (see full discussion here). The answer was that no, this is not sound because PCs are orthogonal to each other.
Can someone explain in a bit more detail why this is so?
|
[
"https://stats.stackexchange.com/questions/146",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/144/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
A principal component is a weighted linear combination of all your factors (X's).
example: PC1 = 0.1X1 + 0.3X2
There will be one component for each factor (though in general a small number are selected).
The components are created such that they have zero correlation (are orthogonal), by design.
Therefore, component PC1 should not explain any variation in component PC2.
You may want to do regression on your Y variable and the PCA representation of your X's, as they will not have multi-collinearity. However, this could be hard to interpret.
If you have more X's than observations, which breaks OLS, you can regress on your components, and simply select a smaller number of the highest variation components.
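A minimal R sketch of both points, using the built-in mtcars data: the component scores are uncorrelated by construction, and they can still be used as regressors (principal components regression):
X <- mtcars[, c("disp", "hp", "wt", "qsec")]
y <- mtcars$mpg
pc <- prcomp(X, scale. = TRUE)
round(cor(pc$x), 10)           # identity matrix: the PC scores are orthogonal
summary(lm(y ~ pc$x[, 1:2]))   # regress y on the first two component scores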
Principal Component Analysis by Jolliffe is a very in-depth and highly cited book on the subject.
This is also good: http://www.statsoft.com/textbook/principal-components-factor-analysis/
|
Label switching (i.e., the posterior distribution is invariant to switching component labels) is a problematic issue when using MCMC to estimate mixture models.
Is there a standard (as in widely accepted) methodology to deal with the issue?
If there is no standard approach then what are the pros and cons of the leading approaches to solve the label switching problem?
|
[
"https://stats.stackexchange.com/questions/152",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
There is a nice and reasonably recent discussion of this problem here:
Christian P. Robert, Multimodality and label switching: a discussion. Workshop on mixtures, ICMS, March 3, 2010.
Essentially, there are several standard strategies, and each has pros and cons. The most obvious thing to do is to formulate the prior in such a way as to ensure there is only one posterior mode (e.g., order the means of the mixture components), but this turns out to have a strange effect on the posterior, and therefore isn't generally used. Next is to ignore the problem during sampling, and then post-process the output to re-label the components to keep the labels consistent. This is easy to implement and seems to work OK. The more sophisticated approaches re-label on-line, either by keeping a single mode, or deliberately randomly permuting the labels to ensure mixing over multiple modes. I quite like the latter approach, but this still leaves the problem of how to summarise the output meaningfully. However, I see that as a separate problem.
|
I really enjoy hearing simple explanations to complex problems. What is your favorite analogy or anecdote that explains a difficult statistical concept?
My favorite is Murray's explanation of cointegration using a drunkard and her dog. Murray explains how two random processes (a wandering drunk and her dog, Oliver) can each have a unit root yet still be related (cointegrated), since the distance between them is stationary.
The drunk sets out from the bar, about to wander aimlessly in random-walk fashion. But periodically she intones "Oliver, where are you?", and Oliver interrupts his aimless wandering to bark. He hears her; she hears him. He thinks, "Oh, I can't let her get too far off; she'll lock me out." She thinks, "Oh, I can't let him get too far off; he'll wake me up in the middle of the night with his barking." Each assesses how far away the other is and moves to partially close that gap.
|
[
"https://stats.stackexchange.com/questions/155",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/154/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
If you carved your distribution (histogram) out
of wood, and tried to balance it on
your finger, the balance point would
be the mean, no matter the shape of the distribution.
If you put a stick in the middle of
your scatter plot, and attached the
stick to each data point with a
spring, the resting point of the
stick would be your regression line. [1]
[1] This would technically be principal components regression; you would have to force the springs to move only "vertically" to be least squares, but the example is illustrative either way.
|
Econometricians often talk about a time series being integrated with order k, I(k). k being the minimum number of differences required to obtain a stationary time series.
What methods or statistical tests can be used to determine, given a level of confidence, the order of integration of a time series?
|
[
"https://stats.stackexchange.com/questions/161",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/154/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
There are a number of statistical tests (known as "unit root tests") for dealing with this problem. The most popular is probably the "Augmented Dickey-Fuller" (ADF) test, although the Phillips-Perron (PP) test and the KPSS test are also widely used.
Both the ADF and PP tests are based on a null hypothesis of a unit root (i.e., an I(1) series). The KPSS test is based on a null hypothesis of stationarity (i.e., an I(0) series). Consequently, the KPSS test can give quite different results from the ADF or PP tests.
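In R, a minimal sketch might look like this (the tseries and forecast packages, and the simulated random walk, are assumptions):
library(tseries)
library(forecast)
set.seed(1)
x <- cumsum(rnorm(200))   # a random walk, i.e. an I(1) series
adf.test(x)               # null: unit root; typically not rejected here
kpss.test(x)              # null: stationarity; typically rejected here
ndiffs(x)                 # estimated number of differences required, usually 1 here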
|
Maybe the concept, why it's used, and an example.
|
[
"https://stats.stackexchange.com/questions/165",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/74/"
] | 9
|
HuggingFaceH4/stack-exchange-preferences
|
First, we need to understand what is a Markov chain. Consider the following weather example from Wikipedia. Suppose that weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we know the following:
$P(\text{Next day is Sunny}\,\vert\,\text{today is Rainy})=0.50$
Since the next day's weather is either sunny or rainy, it follows that:
$P(\text{Next day is Rainy}\,\vert\,\text{today is Rainy})=0.50$
Similarly, let:
$P(\text{Next day is Rainy}\,\vert\,\text{today is Sunny})=0.10$
Therefore, it follows that:
$P(\text{Next day is Sunny}\,\vert\,\text{today is Sunny})=0.90$
The above four numbers can be compactly represented as a transition matrix which represents the probabilities of the weather moving from one state to another state as follows:
$P = \begin{bmatrix}
& S & R \\
S& 0.9 & 0.1 \\
R& 0.5 & 0.5
\end{bmatrix}$
We might ask several questions whose answers follow:
Q1: If the weather is sunny today then what is the weather likely to be tomorrow?
A1: Since we do not know what is going to happen for sure, the best we can say is that there is a $90\%$ chance that it will be sunny and a $10\%$ chance that it will be rainy.
Q2: What about two days from today?
A2: One day prediction: $90\%$ sunny, $10\%$ rainy. Therefore, two days from now:
First day it can be sunny and the next day also it can be sunny. Chances of this happening are: $0.9 \times 0.9$.
Or
First day it can be rainy and second day it can be sunny. Chances of this happening are: $0.1 \times 0.5$.
Therefore, the probability that the weather will be sunny in two days is:
$P(\text{Sunny 2 days from now}) = 0.9 \times 0.9 + 0.1 \times 0.5 = 0.81 + 0.05 = 0.86$
Similarly, the probability that it will be rainy is:
$P(\text{Rainy 2 days from now}) = 0.1 \times 0.5 + 0.9 \times 0.1 = 0.05 + 0.09 = 0.14$
In linear algebra terms (transition matrices), these calculations correspond to all the permutations of transitions from one step to the next (sunny-to-sunny ($S_2S$), sunny-to-rainy ($S_2R$), rainy-to-sunny ($R_2S$) or rainy-to-rainy ($R_2R$)), each with its calculated probability.
The probability of a future state ($t+1$ or $t+2$), given the probabilities (probability mass function, $PMF$) for every state (sunny or rainy) at time zero (now, $t_0$), then follows by simple matrix multiplication.
If you keep forecasting weather like this you will notice that eventually the $n$-th day forecast, where $n$ is very large (say $30$), settles to the following 'equilibrium' probabilities:
$P(\text{Sunny}) = 0.833$
and
$P(\text{Rainy}) = 0.167$
In other words, your forecast for the $n$-th day and the $n+1$-th day remain the same. In addition, you can also check that the 'equilibrium' probabilities do not depend on the weather today. You would get the same forecast for the weather if you start off by assuming that the weather today is sunny or rainy.
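You can check this numerically in a few lines of R (a minimal sketch of the example above):
P <- matrix(c(0.9, 0.1,
              0.5, 0.5), nrow = 2, byrow = TRUE,
            dimnames = list(c("S", "R"), c("S", "R")))
p <- c(1, 0)                  # start from a sunny day
for (i in 1:30) p <- p %*% P  # iterate the chain 30 times
p                             # approximately (0.833, 0.167), whatever the starting state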
The above example will only work if the state transition probabilities satisfy several conditions which I will not discuss here. But, notice the following features of this 'nice' Markov chain (nice = transition probabilities satisfy conditions):
Irrespective of the initial starting state we will eventually reach an equilibrium probability distribution of states.
Markov Chain Monte Carlo exploits the above feature as follows:
We want to generate random draws from a target distribution. We then identify a way to construct a 'nice' Markov chain such that its equilibrium probability distribution is our target distribution.
If we can construct such a chain then we arbitrarily start from some point and iterate the Markov chain many times (like how we forecast the weather $n$ times). Eventually, the draws we generate would appear as if they are coming from our target distribution.
We then approximate the quantities of interest (e.g. mean) by taking the sample average of the draws after discarding a few initial draws which is the Monte Carlo component.
There are several ways to construct 'nice' Markov chains (e.g., Gibbs sampler, Metropolis-Hastings algorithm).
|
Australia is currently having an election and understandably the media reports new political poll results daily. In a country of 22 million what percentage of the population would need to be sampled to get a statistically valid result?
Is it possible that using too large a sample could affect the results, or does statistical validity monotonically increase with sample size?
|
[
"https://stats.stackexchange.com/questions/166",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/154/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Sample size doesn't much depend on the population size, which is counter-intuitive to many.
Most polling companies use 400 or 1000 people in their samples.
There is a reason for this:
A sample size of 400 will give you a margin of error of about +/-5%, 19 times out of 20 (95% confidence).
A sample size of 1000 will give you a margin of error of about +/-3%, 19 times out of 20 (95% confidence).
This assumes you are measuring a proportion near 50%, where the margin of error is at its widest.
This calculator isn't bad:
http://www.raosoft.com/samplesize.html
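The numbers above come from the usual margin-of-error formula for a proportion; a minimal sketch in R (assuming simple random sampling and a proportion near 50%):
moe <- function(n, p = 0.5, conf = 0.95) qnorm(1 - (1 - conf) / 2) * sqrt(p * (1 - p) / n)
moe(400)    # about 0.049, i.e. roughly +/-5%
moe(1000)   # about 0.031, i.e. roughly +/-3%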
|
For univariate kernel density estimators (KDE), I use Silverman's rule for calculating $h$:
\begin{equation}
h = 0.9 \min\left(\mathrm{sd}, \frac{\mathrm{IQR}}{1.34}\right) n^{-1/5}
\end{equation}
What are the standard rules for multivariate KDE (assuming a Normal kernel).
|
[
"https://stats.stackexchange.com/questions/168",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
For a univariate KDE, you are better off using something other than Silverman's rule which is based on a normal approximation. One excellent approach is the Sheather-Jones method, easily implemented in R; for example,
plot(density(precip, bw="SJ"))
The situation for multivariate KDE is not so well studied, and the tools are not so mature. Rather than a bandwidth, you need a bandwidth matrix. To simplify the problem, most people assume a diagonal matrix, although this may not lead to the best results. The ks package in R provides some very useful tools including allowing a full (not necessarily diagonal) bandwidth matrix.
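A minimal sketch with the ks package (the toy bivariate data are an assumption):
library(ks)
set.seed(1)
x <- cbind(rnorm(200), rnorm(200))   # toy bivariate data
H <- Hpi(x)                          # plug-in estimate of the full bandwidth matrix
fhat <- kde(x, H = H)
plot(fhat)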
|
Are there any free statistical textbooks available?
|
[
"https://stats.stackexchange.com/questions/170",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
Online books include
http://davidmlane.com/hyperstat/
http://vassarstats.net/textbook/
http://www.psychstat.missouristate.edu/multibook2/mlt.htm
http://bookboon.com/uk/student/statistics
http://www.freebookcentre.net/SpecialCat/Free-Statistics-Books-Download.html
Update: I can now add my own forecasting textbook
Forecasting: principles and practice (Hyndman & Athanasopoulos, 2012)
|
I recently started working for a tuberculosis clinic. We meet periodically to discuss the number of TB cases we're currently treating, the number of tests administered, etc. I'd like to start modeling these counts so that we're not just guessing whether something is unusual or not. Unfortunately, I've had very little training in time series, and most of my exposure has been to models for very continuous data (stock prices) or very large numbers of counts (influenza). But we deal with 0-18 cases per month (mean 6.68, median 7, var 12.3), which are distributed like this:
[plots of the monthly counts are no longer available]
I've found a few articles that address models like this, but I'd greatly appreciate hearing suggestions from you - both for approaches and for R packages that I could use to implement those approaches.
EDIT: mbq's answer has forced me to think more carefully about what I'm asking here; I got too hung-up on the monthly counts and lost the actual focus of the question. What I'd like to know is: does the (fairly visible) decline from, say, 2008 onward reflect a downward trend in the overall number of cases? It looks to me like the number of cases monthly from 2001-2007 reflects a stable process; maybe some seasonality, but overall stable. From 2008 through the present, it looks like that process is changing: the overall number of cases is declining, even though the monthly counts might wobble up and down due to randomness and seasonality. How can I test if there's a real change in the process? And if I can identify a decline, how could I use that trend and whatever seasonality there might be to estimate the number of cases we might see in the upcoming months?
|
[
"https://stats.stackexchange.com/questions/173",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/71/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
To assess the historical trend, I'd use a gam with trend and seasonal components. For example
require(mgcv)
require(forecast)
x <- ts(rpois(100,1+sin(seq(0,3*pi,l=100))),f=12)
tt <- 1:100
season <- seasonaldummy(x)
fit <- gam(x ~ s(tt,k=5) + season, family="poisson")
plot(fit)
Then summary(fit) will give you a test of significance of the change in trend and the plot will give you some confidence intervals. The assumptions here are that the observations are independent and the conditional distribution is Poisson. Because the mean is allowed to change smoothly over time, these are not particularly strong assumptions.
To forecast is more difficult as you need to project the trend into the future. If you are willing to accept a linear extrapolation of the trend at the end of the data (which is certainly dodgy but probably ok for a few months), then use
fcast <- predict(fit,se.fit=TRUE,
newdata=list(tt=101:112,season=seasonaldummyf(x,h=12)))
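# note: in more recent versions of the forecast package, seasonaldummyf() is deprecated;
# seasonaldummy(x, h = 12) plays the same role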
To see the forecasts on the same graph:
plot(x,xlim=c(0,10.5))
lines(ts(exp(fcast$fit),f=12,s=112/12),col=2)
lines(ts(exp(fcast$fit-2*fcast$se),f=12,s=112/12),col=2,lty=2)
lines(ts(exp(fcast$fit+2*fcast$se),f=12,s=112/12),col=2,lty=2)
You can spot the unusual months by looking for outliers in the (deviance) residuals of the fit.
|
Often a statistical analyst is handed a dataset and asked to fit a model using a technique such as linear regression. Very frequently the dataset is accompanied by a disclaimer similar to "Oh yeah, we messed up collecting some of these data points -- do what you can".
This situation leads to regression fits that are heavily impacted by the presence of outliers that may be erroneous data. Given the following:
It is dangerous from both a scientific and moral standpoint to throw out data for no reason other than it "makes the fit look bad".
In real life, the people who collected the data are frequently not available to answer questions such as "when generating this data set, which of the points did you mess up, exactly?"
What statistical tests or rules of thumb can be used as a basis for excluding outliers in linear regression analysis?
Are there any special considerations for multilinear regression?
|
[
"https://stats.stackexchange.com/questions/175",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Rather than exclude outliers, you can use a robust method of regression. In R, for example, the rlm() function from the MASS package can be used instead of the lm() function. The method of estimation can be tuned to be more or less robust to outliers.
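A minimal sketch (the simulated data and the choice of Huber weights are illustrative assumptions):
library(MASS)
set.seed(1)
x <- 1:50
y <- 2 + 0.5 * x + rnorm(50)
y[c(10, 40)] <- y[c(10, 40)] + 20        # inject two gross outliers
fit_ols <- lm(y ~ x)                      # pulled towards the outliers
fit_rob <- rlm(y ~ x, psi = psi.huber)    # down-weights the outliers instead
coef(fit_ols)
coef(fit_rob)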
|
Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I'm interested in automated ways of building neural networks.
|
[
"https://stats.stackexchange.com/questions/181",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/159/"
] | 10
|
HuggingFaceH4/stack-exchange-preferences
|
I realize this question has been answered, but I don't think the extant answer really engages the question beyond pointing to a link generally related to the question's subject matter. In particular, the link describes one technique for programmatic network configuration, but that is not a "[a] standard and accepted method" for network configuration.
By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture but probably not an optimal one.
But once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs--in other words, eliminating unnecessary/redundant nodes (more on this below).
So every NN has three types of layers: input, hidden, and output.
Creating the NN architecture, therefore, means coming up with values for the number of layers of each type and the number of nodes in each of these layers.
The Input Layer
Simple--every NN has exactly one of them--no exceptions that I'm aware of.
With respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. Specifically, the number of neurons comprising that layer is equal to the number of features (columns) in your data. Some NN configurations add one additional node for a bias term.
The Output Layer
Like the Input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration.
Is your NN going to run in classification ("machine") mode or regression mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Classification mode returns a class label (e.g., "Premium Account"/"Basic Account"); regression mode returns a value (e.g., price).
If the NN is a regressor, then the output layer has a single node.
If the NN is a classifier, then it also has a single node unless softmax is used
in which case the output layer has one node per class label in your model.
The Hidden Layers
So those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers.
How many hidden layers? Well, if your data is linearly separable (which you often know by the time you begin coding a NN), then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job.
Beyond that, as you probably know, there's a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful NN FAQ for an excellent summary of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems.
So what about the size of the hidden layer(s)--how many neurons? There are some empirically derived rules of thumb; of these, the most commonly relied on is 'the optimal size of the hidden layer is usually between the size of the input and size of the output layers'. Jeff Heaton, the author of Introduction to Neural Networks in Java, offers a few more.
In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) the number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.
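As a minimal sketch of those two rules (the iris example, and the use of the single-hidden-layer nnet package, are illustrative assumptions):
library(nnet)
n_inputs  <- ncol(iris) - 1                       # 4 features
n_outputs <- length(levels(iris$Species))         # 3 classes
hidden    <- round(mean(c(n_inputs, n_outputs)))  # rule (ii): mean of input and output sizes
fit <- nnet(Species ~ ., data = iris, size = hidden, maxit = 200, trace = FALSE)
table(predicted = predict(fit, iris, type = "class"), actual = iris$Species)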
Optimization of the Network Configuration
Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look at weights very close to zero--it's the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely to have excess (i.e., 'prunable') nodes--in other words, when deciding on network architecture, err on the side of more neurons, if you add a pruning step.
Put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration; whether you can do that in a single "up-front" (such as a genetic-algorithm-based algorithm), I don't know, though I do know that for now, this two-step optimization is more common.
|
I am sure that everyone who's trying to find patterns in historical stock market data or betting history would like to know about this. Given a huge set of data and thousands of random variables that may or may not affect it, it makes sense to ask whether any patterns you extract from the data are indeed true patterns, not statistical flukes.
Many patterns are only valid when they are tested in-sample. And even patterns that are valid out-of-sample may cease to be valid when you apply them in the real world.
I understand that it is not possible to be completely, 100% sure that a pattern is valid all the time, but besides in-sample and out-of-sample tests, are there any tests that could establish the validity of a pattern?
|
[
"https://stats.stackexchange.com/questions/194",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/175/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
If you want to know that a pattern is meaningful, you need to show what it actually means. Statistical tests do not do this. Unless your data can be said to be in some sense "complete", inferences drawn from the data will always be provisional.
You can increase your confidence in the validity of a pattern by testing against more and more out-of-sample data, but that doesn't protect you from it turning out to be an artefact. The broader your range of out-of-sample data -- e.g., in terms of how it is acquired and what sort of systematic confounding factors might exist within it -- the better the validation.
Ideally, though, you need to go beyond identifying patterns and come up with a persuasive theoretical framework that explains the patterns you've found, and then test that by other, independent means. (This is called "science".)
|
Besides gnuplot and ggobi, what open source tools are people using for visualizing multi-dimensional data?
Gnuplot is more or less a basic plotting package.
Ggobi can do a number of nifty things, such as:
animate data along a dimension or among discrete collections
animate linear combinations varying the coefficients
compute principal components and other transformations
visualize and rotate 3 dimensional data clusters
use colors to represent a different dimension
What other useful approaches are based in open source and thus freely reusable or customizable?
Please provide a brief description of the package's abilities in the answer.
|
[
"https://stats.stackexchange.com/questions/196",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/87/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
How about R with ggplot2?
Other tools that I really like:
Processing
Prefuse
Protovis
|
Following on from this question:
Imagine that you want to test for differences in central tendency between two groups (e.g., males and females)
on a 5-point Likert item (e.g., satisfaction with life: Dissatisfied to Satisfied).
I think a t-test would be sufficiently accurate for most purposes,
but that a bootstrap test of differences between group means would often provide more accurate estimate of confidence intervals.
What statistical test would you use?
|
[
"https://stats.stackexchange.com/questions/203",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/183/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Clason & Dormody discussed the issue of statistical testing for Likert items (Analyzing data measured by individual Likert-type items). I think that a bootstrapped test is ok when the two distributions look similar (bell shaped and equal variance). However, a test for categorical data (e.g. trend or Fisher test, or ordinal logistic regression) would be interesting too, since it allows checking the response distribution across the item categories; see Agresti's book on Categorical Data Analysis (Chapter 7 on Logit models for multinomial responses).
Aside from this, you can imagine situations where the t-test, or any non-parametric test, would fail if the response distribution is strongly imbalanced between the two groups. For example, if all people from group A answer 1 or 5 (in equal proportion) whereas all people in group B answer 3, then you end up with identical within-group means and the test is not meaningful at all, even though in this case the homoscedasticity assumption is clearly violated.
|
I'm curious about why we treat fitting GLMs as though they were some special optimization problem. Are they? It seems to me that they're just maximum likelihood, and that we write down the likelihood and then ... we maximize it! So why do we use Fisher scoring instead of any of the myriad optimization schemes that have been developed in the applied math literature?
|
[
"https://stats.stackexchange.com/questions/205",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/187/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Fisher scoring is just a version of Newton's method that happens to be identified with GLMs; there's nothing particularly special about it, other than the fact that the Fisher information matrix happens to be rather easy to find for random variables in the exponential family. It also ties in to a lot of other math-stat material that tends to come up about the same time, and gives a nice geometric intuition about what exactly Fisher information means.
There's absolutely no reason I can think of not to use some other optimizer if you prefer, other than that you might have to code it by hand rather than use a pre-existing package. I suspect that any strong emphasis on Fisher scoring is a combination of (in order of decreasing weight) pedagogy, ease-of-derivation, historical bias, and "not-invented-here" syndrome.
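To make the point concrete, here is a minimal sketch in R (the simulated logistic-regression data are an assumption): glm() uses Fisher scoring / IRLS, while a generic optimizer maximizing the same log-likelihood lands on essentially the same estimates.
set.seed(42)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))
fit_glm <- glm(y ~ x, family = binomial)   # Fisher scoring / IRLS under the hood
negloglik <- function(beta) {
  eta <- beta[1] + beta[2] * x
  -sum(y * eta - log(1 + exp(eta)))        # negative Bernoulli log-likelihood
}
fit_opt <- optim(c(0, 0), negloglik, method = "BFGS")
coef(fit_glm)                              # essentially the same estimates
fit_opt$par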
|
What is the difference between discrete data and continuous data?
|
[
"https://stats.stackexchange.com/questions/206",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/188/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
Discrete data can only take particular values. There may potentially be an infinite number of those values, but each is distinct and there's no grey area in between. Discrete data can be numeric -- like numbers of apples -- but it can also be categorical -- like red or blue, or male or female, or good or bad.
Continuous data are not restricted to defined separate values, but can occupy any value over a continuous range. Between any two continuous data values, there may be an infinite number of others. Continuous data are always essentially numeric.
It sometimes makes sense to treat discrete data as continuous and the other way around:
For example, something like height is continuous, but often we don't really care too much about tiny differences and instead group heights into a number of discrete bins -- i.e. only measuring to the nearest centimetre.
Conversely, if we're counting large amounts of some discrete entity -- grains of rice, or termites, or pennies in the economy -- we may choose not to think of 2,000,006 and 2,000,008 as crucially different values but instead as nearby points on an approximate continuum.
It can also sometimes be useful to treat numeric data as categorical, eg: underweight, normal, obese. This is usually just another kind of binning.
It seldom makes sense to consider categorical data as continuous.
|
Suppose I have a large set of multivariate data with at least three variables. How can I find the outliers? Pairwise scatterplots won't work as it is possible for an outlier to exist in 3 dimensions that is not an outlier in any of the 2 dimensional subspaces.
I am not thinking of a regression problem, but of true multivariate data. So answers involving robust regression or computing leverage are not helpful.
One possibility would be to compute the principal component scores and look for an outlier in the bivariate scatterplot of the first two scores. Would that be guaranteed to work? Are there better approaches?
|
[
"https://stats.stackexchange.com/questions/213",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/159/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
I think Robin Girard's answer would work pretty well for 3 and possibly 4 dimensions, but the curse of dimensionality would prevent it working beyond that. However, his suggestion led me to a related approach which is to apply the cross-validated kernel density estimate to the first three principal component scores. Then a very high-dimensional data set can still be handled ok.
In summary, for i = 1 to n:
Compute a density estimate of the first three principal component scores obtained from the data set without Xi.
Calculate the likelihood of Xi under the density estimated in the previous step; call it Li.
end for
Sort the Li (for i=1,..,n) and the outliers are those with likelihood below some threshold. I'm not sure what would be a good threshold -- I'll leave that for whoever writes the paper on this! One possibility is to do a boxplot of the log(Li) values and see what outliers are detected at the negative end.
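A minimal sketch of the procedure (the toy data, the ks package, and the simplification of computing the PCA once on the full data are illustrative assumptions):
library(ks)
set.seed(1)
X <- matrix(rnorm(100 * 10), ncol = 10)      # 100 observations in 10 dimensions
X[1, ] <- X[1, ] + 3                         # plant one outlier
scores <- prcomp(X, scale. = TRUE)$x[, 1:3]  # first three principal component scores
logL <- sapply(seq_len(nrow(scores)), function(i) {
  fhat <- kde(scores[-i, ])                  # density estimated without observation i
  log(predict(fhat, x = scores[i, , drop = FALSE]))
})
boxplot(logL)                                # outliers show up at the negative end
order(logL)[1:5]                             # indices of the most suspicious points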
|
What are some good visualization libraries for online use? Are they easy to use and is there good documentation?
|
[
"https://stats.stackexchange.com/questions/216",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/191/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
IMO, Protovis is the best and is very well documented and supported. It is the basis for my webvis R package.
These are also very good, although they have more of a learning curve:
Processing
Prefuse
|
If $X_1, ..., X_n$ are independent identically-distributed random variables, what can be said about the distribution of $\min(X_1, ..., X_n)$ in general?
|
[
"https://stats.stackexchange.com/questions/220",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/85/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
If the cdf of $X_i$ is denoted by $F(x)$, then the cdf of the minimum is given by $1-[1-F(x)]^n$.
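A one-line derivation, using independence and the common cdf $F$:
$$P(\min(X_1, \ldots, X_n) \le x) = 1 - P(X_1 > x, \ldots, X_n > x) = 1 - \prod_{i=1}^{n} P(X_i > x) = 1 - [1 - F(x)]^n.$$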
|
What are principal component scores (PC scores, PCA scores)?
|
[
"https://stats.stackexchange.com/questions/222",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/191/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
First, let's define a score.
John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows:
Maths Science English Music
John 80 85 60 55
Mike 90 85 70 45
Kate 95 80 40 50
In this case there are 12 scores in total. Each score represents the exam results for each person in a particular subject. So a score in this case is simply a representation of where a row and column intersect.
Now let's informally define a Principal Component.
In the table above, can you easily plot the data in a 2D graph? No, because there are four subjects (which means four variables: Maths, Science, English, and Music), i.e.:
You could plot two subjects in the exact same way you would with $x$ and $y$ co-ordinates in a 2D graph.
You could even plot three subjects in the same way you would plot $x$, $y$ and $z$ in a 3D graph (though this is generally bad practice, because some distortion is inevitable in the 2D representation of 3D data).
But how would you plot 4 subjects?
At the moment we have four variables which each represent just one subject. So a method around this might be to somehow combine the subjects into maybe just two new variables which we can then plot. This is known as Multidimensional scaling.
Principal Component analysis is a form of multidimensional scaling. It is a linear transformation of the variables into a lower-dimensional space which retains the maximal amount of information about the variables. For example, this would mean we could look at the types of subjects each student is maybe more suited to.
A principal component is therefore a combination of the original variables after a linear transformation. In R, this is:
DF <- data.frame(Maths=c(80, 90, 95), Science=c(85, 85, 80),
English=c(60, 70, 40), Music=c(55, 45, 50))
prcomp(DF, scale = FALSE)
Which will give you something like this (first two Principal Components only for sake of simplicity):
PC1 PC2
Maths 0.27795606 0.76772853
Science -0.17428077 -0.08162874
English -0.94200929 0.19632732
Music 0.07060547 -0.60447104
The first column here shows coefficients of linear combination that defines principal component #1, and the second column shows coefficients for principal component #2.
So what is a Principal Component Score?
It's a score from the table at the end of this post (see below).
The above output from R means we can now plot each person's score across all subjects in a 2D graph as follows. First, we need to center the original variables by subtracting column means:
Maths Science English Music
John -8.33 1.66 3.33 5
Mike 1.66 1.66 13.33 -5
Kate 6.66 -3.33 -16.66 0
And then to form linear combinations to get PC1 and PC2 scores:
x y
John -0.28*8.33 + -0.17*1.66 + -0.94*3.33 + 0.07*5 -0.77*8.33 + -0.08*1.66 + 0.19*3.33 + -0.60*5
Mike 0.28*1.66 + -0.17*1.66 + -0.94*13.33 + -0.07*5 0.77*1.66 + -0.08*1.66 + 0.19*13.33 + -0.60*5
Kate 0.28*6.66 + 0.17*3.33 + 0.94*16.66 + 0.07*0 0.77*6.66 + 0.08*3.33 + -0.19*16.66 + -0.60*0
Which simplifies to:
x y
John -5.39 -8.90
Mike -12.74 6.78
Kate 18.13 2.12
There are six principal component scores in the table above. You can now plot the scores in a 2D graph to get a sense of the type of subjects each student is perhaps more suited to.
The same output can be obtained in R by typing prcomp(DF, scale = FALSE)$x.
EDIT 1: Hmm, I probably could have thought up a better example, and there is more to it than what I've put here, but I hope you get the idea.
EDIT 2: full credit to @drpaulbrewer for his comment in improving this answer.
|
Which visualization libraries (plots, graphs, ...) would you suggest to use in a standalone application (Linux, .Net, Windows, whatever). Reasonable performance would be nice as well.
|
[
"https://stats.stackexchange.com/questions/224",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/128/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
The Visualization Tool Kit VTK is pretty impressive for 3D visualizations of numerical data. Unfortunately, it is also pretty low level.
Graphviz is used pretty extensively for visualizing graphs and other tree-like data structures.
igraph can also be used for visualization of tree-like data structures. Contains nice interfaces to scripting languages such as R and Python along with a stand-alone C library.
The NCL (NCAR Command Language) library contains some pretty neat graphing routines- especially if you are looking at spatially distributed, multidimensional data such as wind fields. Which makes sense as NCAR is the National Center for Atmospheric Research.
If you are willing to relax the executable requirement, or try a tool like py2exe, there is the possibility of leveraging some neat Python libraries and applications such as:
MayaVi: A higher level front-end to VTK developed by Enthought.
Chaco: Another Enthought library focused on 2D graphs.
Matplotlib: Another 2D plotting library. Has nice support for TeX-based mathematical annotation.
Basemap: An add-on to Matplotlib for drawing maps and displaying geographic data (sexy examples here).
If we were to bend the concept of "standalone application" even further to include PDF files, there are some neat graphics libraries available to LaTeX users:
Asymptote can generate a variety of graphs, but its crown jewel is definitely the ability to embed 3D graphs into PDF documents that can be manipulated (zoomed, rotated, animated, etc) by anyone using the Adobe Acrobat reader (example).
PGF/TikZ provides a wonderful vector drawing language to TeX documents. The manual is hands-down the most well-written, comprehensive and beautiful piece of documentation I have ever seen in an open source project. PGFPlots provides an abstraction layer for drawing plots. A wonderful showcase can be found at TeXample.
PSTricks served as an inspiration for TikZ and allows users to leverage the power of the PostScript language to create some neat graphics.
And for kicks, there's DISLIN, which has a native interface for Fortran! Not open source or free for commercial use though.
|
Why is the average of the highest value from 100 draws from a normal distribution different from the 98th percentile of the normal distribution? It seems that by definition they should be the same. But...
Code in R:
NSIM <- 10000
x <- rep(NA,NSIM)
for (i in 1:NSIM)
{
x[i] <- max(rnorm(100))
}
qnorm(.98)
qnorm(.99)
mean(x)
median(x)
hist(x)
I imagine that I'm misunderstanding something about what the maximum of 100 draws from the normal distribution should be, as is demonstrated by the unexpectedly asymmetrical distribution of maximum values.
|
[
"https://stats.stackexchange.com/questions/225",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/196/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
The maximum does not have a normal distribution. Its cdf is $\Phi(x)^{100}$ where $\Phi(x)$ is the standard normal cdf. In general the moments of this distribution are tricky to obtain analytically. There is an ancient paper on this by Tippett (Biometrika, 1925).
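You can compute the expected maximum numerically from this cdf and compare it with the quantiles in the question (a minimal sketch in R):
n <- 100
integrand <- function(x) x * n * dnorm(x) * pnorm(x)^(n - 1)  # x times the density of the maximum
integrate(integrand, -Inf, Inf)$value   # about 2.51, close to mean(x) in the simulation
qnorm(0.99)                             # about 2.33 -- a different quantity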
|
This is a bit of a flippant question, but I have a serious interest in the answer. I work in a psychiatric hospital and I have three years of data, collected every day across each ward, regarding the level of violence on that ward.
Clearly the model which fits these data is a time series model. I had to difference the scores in order to make them more normal. I fit an ARMA model with the differenced data, and the best fit I think was a model with one degree of differencing and first order auto-correlation at lag 2.
My question is, what on earth can I use this model for? Time series always seems so useful in the textbooks when it's about hare populations and oil prices, but now I've done my own the result seems so abstract as to be completely opaque. The differenced scores correlate with each other at lag two, but I can't really advise everyone to be on high alert two days after a serious incident in all seriousness.
Or can I?
|
[
"https://stats.stackexchange.com/questions/242",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/199/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
The model that fits the data doesn't have to be a time series model; I would advise thinking outside the box a little.
If you have multiple variables (e.g. age, gender, diet, ethnicity, illness, medication) you can use these for a different model. Maybe having certain patients in the same room is an important predictor? Or perhaps it has to do with the attending staff? Or consider using a multi-variate time series model (e.g. VECM) if you have other variables that you can use. Look at the relationships between violence across patients: do certain patients act out together?
The time series model is useful if time has some important role in the behavior. For instance, there might be a clustering of violence. Look at the volatility clustering literature. As @Jonas suggests, with a lag order of 2, you may need to be on higher alert on the day following a burst in violence. But that doesn't help you prevent the first day: there may be other information that you can link into the analysis to actually understand the cause of the violence, rather than simply forecasting it in a time series fashion.
Lastly, as a technical suggestion: if you're using R for the analysis, you might have a look at the forecast package from Rob Hyndman (the creator of this site). This has many very nice features; see the paper "Automatic Time Series Forecasting: The forecast Package for R" in the Journal of Statistical Software.
|
What is the easiest way to understand boosting?
Why doesn't it boost very weak classifiers "to infinity" (perfection)?
|
[
"https://stats.stackexchange.com/questions/256",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/217/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
In plain English: if your classifier misclassifies some data, train another copy of it mainly on this misclassified part, with the hope that it will discover something subtle. And then, as usual, iterate. Along the way there are some voting schemes that allow you to combine all those classifiers' predictions in a sensible way.
Because sometimes it is impossible (the noise simply hides some of the information, or it is not even present in the data); on the other hand, boosting too much may lead to overfitting.
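A minimal sketch of the idea using decision stumps (the simulated data, the 20 boosting rounds, and the use of rpart for the weak learner are all illustrative assumptions):
library(rpart)
set.seed(1)
n <- 300
x1 <- runif(n); x2 <- runif(n)
y  <- ifelse(x1 + x2 + rnorm(n, sd = 0.2) > 1, 1, -1)
d  <- data.frame(x1, x2, y = factor(y))
w <- rep(1 / n, n); stumps <- list(); alpha <- numeric()
for (m in 1:20) {
  fit  <- rpart(y ~ x1 + x2, data = d, weights = w,
                control = rpart.control(maxdepth = 1, cp = 0, minsplit = 2))
  pred <- ifelse(predict(fit, d, type = "class") == "1", 1, -1)
  err  <- sum(w * (pred != y)) / sum(w)            # weighted error of this weak learner
  a    <- 0.5 * log((1 - err) / err)               # its weight in the final vote
  w    <- w * exp(-a * y * pred); w <- w / sum(w)  # up-weight the misclassified points
  stumps[[m]] <- fit; alpha[m] <- a
}
score <- rowSums(sapply(seq_along(stumps), function(m)
  alpha[m] * ifelse(predict(stumps[[m]], d, type = "class") == "1", 1, -1)))
mean(sign(score) == y)                             # training accuracy of the weighted vote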
|
We may assume that we have CSV file and we want a very basic line plot with several lines on one plot and a simple legend.
|
[
"https://stats.stackexchange.com/questions/257",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/217/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The easiest way is to use R
Use read.csv to enter the data into R, then use a combination of the plot and lines commands
If you want something really special, then look at the libraries ggplot2 or lattice.
In ggplot2 the following commands should get you started.
require(ggplot2)
#You would use read.csv here
N = 10
d = data.frame(x=1:N,y1=runif(N),y2=rnorm(N), y3 = rnorm(N, 0.5))
p = ggplot(d)
p = p+geom_line(aes(x, y1, colour="Type 1"))
p = p+geom_line(aes(x, y2, colour="Type 2"))
p = p+geom_line(aes(x, y3, colour="Type 3"))
#Add points
p = p+geom_point(aes(x, y3, colour="Type 3"))
print(p)
This would give you a plot of the three series as coloured lines, together with a legend.
Saving plots in R
Saving plots in R is straightforward:
#Look at ?jpeg to other different saving options
jpeg("figure.jpg")
print(p)#for ggplot2 graphics
dev.off()
Instead of jpeg's you can also save as a pdf or postscript file:
#This example uses R base graphics
#Just change to print(p) for ggplot2
pdf("figure.pdf")
plot(d$x, d$y1, type="l")
lines(d$x, d$y2)
dev.off()
|
Rules:
one classifier per answer
vote up if you agree
downvote/remove duplicates.
put your application in the comment
|
[
"https://stats.stackexchange.com/questions/258",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/217/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Support vector machine
|
If I have two lists A and B, both of which are subsets of a much larger list C, how can I determine if the degree of overlap of A and B is greater than I would expect by chance?
Should I just randomly select elements from C of the same lengths as lists A and B and determine that random overlap, and do this many times to determine some kind or empirical p-value? Is there a better way to test this?
|
[
"https://stats.stackexchange.com/questions/267",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/194/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
If I understand your question correctly, you need to use the hypergeometric distribution. This distribution is usually associated with urn models, i.e. there are $n$ balls in an urn, $y$ are painted red, and you draw $m$ balls from the urn. Then if $X$ is the number of balls in your sample of $m$ that are red, $X$ has a hypergeometric distribution.
For your specific example, let $n_A$, $n_B$ and $n_C$ denote the lengths of your three lists and let $n_{AB}$ denote the overlap between $A$ and $B$. Then
$$n_{AB} \sim \text{HG}(n_A, n_C, n_B)$$
To calculate a p-value, you could use this R command:
#Some example values
n_A = 100;n_B = 200; n_C = 500; n_A_B = 50
1-phyper(n_A_B, n_B, n_C-n_B, n_A)
[1] 0.008626697
Word of caution. Remember multiple testing: if you have lots of A and B lists, then you will need to adjust your p-values with a correction, for example the FDR or Bonferroni corrections.
|
What is the difference between a population and a sample? What common variables and statistics are used for each one, and how do those relate to each other?
|
[
"https://stats.stackexchange.com/questions/269",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
The population is the set of entities under study. For example, the mean height of men. This is a hypothetical population because it includes all men that have lived, are alive and will live in the future. I like this example because it drives home the point that we, as analysts, choose the population that we wish to study. Typically it is impossible to survey/measure the entire population because not all members are observable (e.g. men who will exist in the future). If it is possible to enumerate the entire population it is often costly to do so and would take a great deal of time. In the example above we have a population "men" and a parameter of interest, their height.
Instead, we could take a subset of this population called a sample and use this sample to draw inferences about the population under study, given some conditions. Thus we could measure the mean height of men in a sample of the population which we call a statistic and use this to draw inferences about the parameter of interest in the population. It is an inference because there will be some uncertainty and inaccuracy involved in drawing conclusions about the population based upon a sample. This should be obvious - we have fewer members in our sample than our population therefore we have lost some information.
There are many ways to select a sample and the study of this is called sampling theory. A commonly used method is called Simple Random Sampling (SRS). In SRS each member of the population has an equal probability of being included in the sample, hence the term "random". There are many other sampling methods e.g. stratified sampling, cluster sampling, etc which all have their advantages and disadvantages.
It is important to remember that the sample we draw from the population is only one of a large number of potential samples. If ten researchers were all studying the same population, drawing their own samples, then they may obtain different answers. Returning to our earlier example, each of the ten researchers may come up with a different mean height of men, i.e. the statistic in question (mean height) varies from sample to sample -- it has a distribution, called a sampling distribution. We can use this distribution to understand the uncertainty in our estimate of the population parameter.
For large samples, the sampling distribution of the sample mean is approximately normal (by the central limit theorem), with standard deviation equal to the population standard deviation divided by the square root of the sample size. Because this could easily be confused with the standard deviation of the sample, it is more common to call the standard deviation of the sampling distribution the standard error.
|
Due to the factorial in a Poisson distribution, it becomes impractical to estimate Poisson models (for example, using maximum likelihood) when the observations are large. So, for example, if I am trying to estimate a model to explain the number of suicides in a given year (only annual data are available), and say, there are thousands of suicides every year, is it wrong to express suicides in hundreds, so that 2998 would be 29.98 ~= 30? In other words, is it wrong to change the unit of measurement to make the data manageable?
|
[
"https://stats.stackexchange.com/questions/270",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/90/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
When you're dealing with a Poisson distribution with large values of $\lambda$ (its parameter), it is common to use a normal approximation to the Poisson distribution.
As this site mentions, it's all right to use the normal approximation when $\lambda$ gets over 20, and the approximation improves as $\lambda$ gets even higher.
The Poisson distribution is defined only over the state space consisting of the non-negative integers, so rescaling and rounding is going to introduce odd things into your data.
Using the normal approximation for large Poisson counts is very common.
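A quick check in R (the particular numbers are just an illustration):
lambda <- 3000
ppois(2950, lambda)                                  # exact Poisson probability
pnorm(2950 + 0.5, mean = lambda, sd = sqrt(lambda))  # normal approximation with continuity correction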
|
Is there a rule-of thumb or even any way at all to tell how large a sample should be in order to estimate a model with a given number of parameters?
So, for example, if I want to estimate a least-squares regression with 5 parameters, how large should the sample be?
Does it matter what estimation technique you are using (e.g. maximum likelihood, least squares, GMM), or how many or what tests you are going to perform? Should the sample variability be taken into account when making the decision?
|
[
"https://stats.stackexchange.com/questions/276",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/90/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The trivial answer is that more data are always preferred to less data.
The problem of small sample size is clear. In linear regression (OLS) technically you can fit a model such as OLS where n = k+1, but you will get rubbish out of it, i.e. very large standard errors. There is a great paper by Arthur Goldberger called Micronumerosity on this topic, which is summarized in chapter 23 of his book A Course in Econometrics.
A common heuristic is that you should have 20 observations for every parameter you want to estimate. It is always a trade off between the size of your standard errors (and therefore significance testing) and the size of your sample. This is one reason some of us hate significance testing as you can get an incredibly small (relative) standard error with an enormous sample and therefore find pointless statistical significance on naive tests such as whether a regression coefficient is zero.
While sample size is important the quality of your sample is more important e.g. whether the sample is generalisable to the population, is it a Simple Random Sample or some other appropriate sampling methodology (and have this been accounted for during analysis), is there measurement error, response bias, selection bias, etc.
|
When would one prefer to use a Conditional Autoregressive model over a Simultaneous Autoregressive model when modelling autocorrelated geo-referenced aerial data?
|
[
"https://stats.stackexchange.com/questions/277",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/215/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Non-spatial model
My House Value is a function of my home Gardening Investment.
SAR model
My House Value is a function of the House Values of my neighbours.
CAR model
My House Value is a function of the Gardening Investment of my neighbours.
|
What is meant when we say we have a saturated model?
|
[
"https://stats.stackexchange.com/questions/283",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/215/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data left to estimate variance.
For example, if you have 6 data points and fit a 5th-order polynomial to the data, you would have a saturated model (one parameter for each of the 5 powers of your independent variable plus one for the constant term).
|
Suppose that I culture cancer cells in $n$ different dishes $g_1, g_2, \ldots, g_n$ and observe the number of cells $n_i$ in each dish that look different than normal. The total number of cells in dish $g_i$ is $t_i$. There are individual differences between individual cells, but also differences between the populations in different dishes, because each dish has a slightly different temperature, amount of liquid, and so on.
I model this as a beta-binomial distribution: $n_i \sim \text{Binomial}(p_i, t_i)$ where $p_i \sim \text{Beta}(\alpha, \beta)$. Given a number of observations of $n_i$ and $t_i$, how can I estimate $\alpha$ and $\beta$?
|
[
"https://stats.stackexchange.com/questions/288",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/220/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
|
I know of Cameron and Trivedi's Microeconometrics Using Stata.
What are other good texts for learning Stata?
|
[
"https://stats.stackexchange.com/questions/290",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/189/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
The UCLA resource listed by Stephen Turner (below) is excellent if you just want to apply methods you're already familiar with using Stata.
If you're looking for textbooks which teach you statistics/econometrics while using Stata then these are solid recommendations (but it depends at what level you're looking at):
Introductory Methods
An Introduction to Modern Econometrics Using Stata by Chris Baum
Introduction to Econometrics by Chris Dougherty
Advanced/Specialised Methods
Multilevel and Longitudinal Modeling Using Stata by Rabe-Hesketh and Skrondal
Regression Models for Categorical Dependent Variables Using Stata by Long and Freese
|
Am I looking for a better behaved distribution for the independent variable in question, or to reduce the effect of outliers, or something else?
|
[
"https://stats.stackexchange.com/questions/298",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/125/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
I always hesitate to jump into a thread with as many excellent responses as this, but it strikes me that few of the answers provide any reason to prefer the logarithm to some other transformation that "squashes" the data, such as a root or reciprocal.
Before getting to that, let's recapitulate the wisdom in the existing answers in a more general way. Some non-linear re-expression of the dependent variable is indicated when any of the following apply:
The residuals have a skewed distribution. The purpose of a transformation is to obtain residuals that are approximately symmetrically distributed (about zero, of course).
The spread of the residuals changes systematically with the values of the dependent variable ("heteroscedasticity"). The purpose of the transformation is to remove that systematic change in spread, achieving approximate "homoscedasticity."
To linearize a relationship.
When scientific theory indicates. For example, chemistry often suggests expressing concentrations as logarithms (giving activities or even the well-known pH).
When a more nebulous statistical theory suggests the residuals reflect "random errors" that do not accumulate additively.
To simplify a model. For example, sometimes a logarithm can simplify the number and complexity of "interaction" terms.
(These indications can conflict with one another; in such cases, judgment is needed.)
So, when is a logarithm specifically indicated instead of some other transformation?
The residuals have a "strongly" positively skewed distribution. In his book on EDA, John Tukey provides quantitative ways to estimate the transformation (within the family of Box-Cox, or power, transformations) based on rank statistics of the residuals. It really comes down to the fact that if taking the log symmetrizes the residuals, it was probably the right form of re-expression; otherwise, some other re-expression is needed.
When the SD of the residuals is directly proportional to the fitted values (and not to some power of the fitted values).
When the relationship is close to exponential.
When residuals are believed to reflect multiplicatively accumulating errors.
You really want a model in which marginal changes in the explanatory variables are interpreted in terms of multiplicative (percentage) changes in the dependent variable.
Finally, some non - reasons to use a re-expression:
Making outliers not look like outliers. An outlier is a datum that does not fit some parsimonious, relatively simple description of the data. Changing one's description in order to make outliers look better is usually an incorrect reversal of priorities: first obtain a scientifically valid, statistically good description of the data and then explore any outliers. Don't let the occasional outlier determine how to describe the rest of the data!
Because the software automatically did it. (Enough said!)
Because all the data are positive. (Positivity often implies positive skewness, but it does not have to. Furthermore, other transformations can work better. For example, a root often works best with counted data.)
To make "bad" data (perhaps of low quality) appear well behaved.
To be able to plot the data. (If a transformation is needed to be able to plot the data, it's probably needed for one or more good reasons already mentioned. If the only reason for the transformation truly is for plotting, go ahead and do it--but only to plot the data. Leave the data untransformed for analysis.)
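To illustrate the multiplicative-error indication above, here is a minimal simulation (the particular model is an assumption): residuals are skewed and heteroscedastic on the raw scale, but well behaved after taking logs.
set.seed(1)
x <- runif(200, 1, 10)
y <- 5 * x^2 * exp(rnorm(200, sd = 0.4))   # multiplicative (lognormal) errors
fit_raw <- lm(y ~ x)
fit_log <- lm(log(y) ~ log(x))
par(mfrow = c(1, 2))
plot(fitted(fit_raw), resid(fit_raw), main = "raw scale")
plot(fitted(fit_log), resid(fit_log), main = "log-log scale")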
|
It seems like when the assumption of homogeneity of variance is met that the results from a Welch adjusted t-test and a standard t-test are approximately the same. Why not simply always use the Welch adjusted t?
|
[
"https://stats.stackexchange.com/questions/305",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/196/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
I would like to oppose the other two answers based on a paper (in German) by Kubinger, Rasch and Moder (2009).
They argue, based on "extensive" simulations from distributions either meeting or not meeting the assumptions imposed by a t-test (normality and homogeneity of variance), that the Welch test performs equally well when the assumptions are met (i.e., basically the same probability of committing alpha and beta errors) but outperforms the t-test when the assumptions are not met, especially in terms of power. Therefore, they recommend always using the Welch test if the sample size exceeds 30.
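To see the two procedures side by side in R, here is a minimal sketch on hypothetical simulated data with unequal variances and unequal sample sizes:

    set.seed(42)
    x <- rnorm(30, mean = 0, sd = 1)   # group 1
    y <- rnorm(90, mean = 0, sd = 3)   # group 2: larger n, larger variance

    t.test(x, y, var.equal = TRUE)     # classical Student t-test (pools the variances)
    t.test(x, y)                       # Welch t-test (R's default): separate variances,
                                       # Welch-Satterthwaite degrees of freedom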
As a meta-comment: for people interested in statistics (like me and probably most others here), an argument based on data (like mine) should count at least as much as arguments based solely on theoretical grounds (like the others here).
Update:
After thinking about this topic again, I found two further recommendations, of which the newer one supports my point. Look at the original papers (which are both, at least for me, freely available) for the arguments that lead to these recommendations.
The first recommendation comes from Graeme D. Ruxton in 2006: "If you want to compare the central tendency of 2 populations based on samples of unrelated data, then the unequal variance t-test should always be used in preference to the Student's t-test or Mann–Whitney U test."
In:
Ruxton, G.D., 2006. The unequal variance t-test is an underused
alternative to Student’s t-test and the Mann–Whitney U test.
Behav. Ecol. 17, 688–690.
The second (older) recommendation is from Coombs et al. (1996, p. 148): "In summary, the independent samples t test is generally acceptable in terms of controlling Type I error rates provided there are sufficiently large equal-sized samples, even when the equal population variance assumption is violated. For unequal-sized samples, however, an alternative that does not assume equal population variances is preferable. Use the James second-order test when distributions are either short-tailed symmetric or normal. Promising alternatives include the Wilcox H and Yuen trimmed means tests, which provide broader control of Type I error rates than either the Welch test or the James test and have greater power when data are long-tailed." (emphasis added)
In:
Coombs WT, Algina J, Oltman D. 1996. Univariate and multivariate omnibus hypothesis tests selected to control type I error rates when population variances are not necessarily equal. Rev Educ Res 66:137–79.
|
I'm looking for a book or online resource that explains different kinds of entropy such as Sample Entropy and Shannon Entropy and their advantages and disadvantages.
Can someone point me in the right direction?
|
[
"https://stats.stackexchange.com/questions/322",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3807/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Cover and Thomas's book Elements of Information Theory is a good source on entropy and its applications, although I don't know that it addresses exactly the issues you have in mind.
|
I realize that the statistical analysis of financial data is a huge topic, but that is exactly why it is necessary for me to ask my question as I try to break into the world of financial analysis.
As at this point I know next to nothing about the subject, the results of my google searches are overwhelming. Many of the matches advocate learning specialized tools or the R programming language. While I will learn these when they are necessary, I'm first interested in books, articles or any other resources that explain modern methods of statistical analysis specifically for financial data. I assume there are a number of different wildly varied methods for analyzing data, so ideally I'm seeking an overview of the various methods that are practically applicable. I'd like something that utilizes real world examples that a beginner is capable of grasping but that aren't overly simplistic.
What are some good resources for learning about the statistical analysis of financial data?
|
[
"https://stats.stackexchange.com/questions/328",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/75/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
You might start with this series of lectures by Robert Shiller at Yale. He gives a good overview of the field.
My favorite books on the subject:
I strongly recommend starting with Statistics and Finance, by David Ruppert (the R code for the book is available). This is a great introduction and covers the basics of finance and statistics so it's appropriate as a first book.
Modeling Financial Time Series with S-Plus, by Eric Zivot
Analysis of Financial Time Series, by Ruey Tsay
Time Series Analysis, by Jonathan D. Cryer
Beyond that, you may want some general resources, and the "bible" of finance is Options, Futures, and Other Derivatives by John Hull.
Lastly, in terms of some good general books, you might start with these two:
A Random Walk Down Wall Street
Against the Gods: The Remarkable Story of Risk
|
Do you think that unbalanced classes is a big problem for k-nearest neighbor? If so, do you know any smart way to handle this?
|
[
"https://stats.stackexchange.com/questions/341",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
I believe Peter Smit's response above is confusing K nearest neighbor (KNN) and K-means, which are very different.
KNN is susceptible to class imbalance, as described well here: https://www.quora.com/Why-does-knn-get-effected-by-the-class-imbalance
|
I'm looking for a good algorithm (meaning minimal computation, minimal storage requirements) to estimate the median of a data set that is too large to store, such that each value can only be read once (unless you explicitly store that value). There are no bounds on the data that can be assumed.
Approximations are fine, as long as the accuracy is known.
Any pointers?
|
[
"https://stats.stackexchange.com/questions/346",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/247/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
How about something like a binning procedure? Assume (for illustration purposes) that you know that the values are between 1 and 1 million. Set up N bins, of size S. So if S=10000, you'd have 100 bins, corresponding to values [1:10000, 10001:20000, ... , 990001:1000000]
Then, step through the values. Instead of storing each value, just increment the counter in the appropriate bin. Using the midpoint of each bin as an estimate, you can make a reasonable approximation of the median. You can scale this to as fine or coarse of a resolution as you want by changing the size of the bins. You're limited only by how much memory you have.
Since you don't know how big your values may get, just pick a bin size large enough that you aren't likely to run out of memory, using some quick back-of-the-envelope calculations. You might also store the bins sparsely, such that you only add a bin if it contains a value.
Edit:
The link ryfm provides gives an example of doing this, with the additional step of using the cumulative percentages to more accurately estimate the point within the median bin, instead of just using midpoints. This is a nice improvement.
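A minimal R sketch of the binning idea (the bounds and bin count below are arbitrary illustration choices): it reads the stream one value at a time, stores only the bin counts, and interpolates within the median bin using cumulative counts.

    streaming_median <- function(stream, lo = 0, hi = 1e6, n_bins = 1000) {
      breaks <- seq(lo, hi, length.out = n_bins + 1)
      counts <- integer(n_bins)
      n <- 0
      for (v in stream) {                      # each value is read once; only counts are kept
        b <- findInterval(v, breaks, rightmost.closed = TRUE)
        counts[b] <- counts[b] + 1
        n <- n + 1
      }
      cum <- cumsum(counts)
      m <- which(cum >= n / 2)[1]              # bin containing the median
      below <- if (m > 1) cum[m - 1] else 0
      # linear interpolation within the median bin using cumulative counts
      breaks[m] + (n / 2 - below) / counts[m] * (breaks[m + 1] - breaks[m])
    }

    set.seed(1)
    x <- runif(1e5, 0, 1e6)
    streaming_median(x)   # close to median(x)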
|
Why do we seek to minimize $x^2$ instead of minimizing $|x|^{1.95}$ or $|x|^{2.05}$?
Are there reasons why the number should be exactly two or is it simply a convention that has the advantage of simplifying the math?
|
[
"https://stats.stackexchange.com/questions/354",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3807/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
There's no reason you couldn't try to minimize norms other than $x^2$; entire books have been written on quantile regression, for instance, which more or less amounts to minimizing $|x|$ if you're working with the median. It's just generally harder to do and, depending on the error model, may not give good estimators (whether that means low-variance, unbiased, or low-MSE estimators in the context).
As for why we prefer integer moments over real-number-valued moments, the main reason is likely that while integer powers of real numbers always result in real numbers, non-integer powers of negative real numbers create complex numbers, thus requiring the use of an absolute value. In other words, while the 3rd moment of a real-valued random variable is real, the 3.2nd moment is not necessarily real, and so causes interpretation problems.
Other than that...
Analytical expressions for the integer moments of random variables are typically much easier to find than real-valued moments, be it by generating functions or some other method. Methods to minimize them are thus easier to write.
The use of integer moments leads to expressions that are more tractable than real-valued moments.
I can't think of a compelling reason that (for instance) the 1.95th moment of the absolute value of X would provide better fitting properties than (for instance) the 2nd moment of X, although that could be interesting to investigate.
Specific to the L2 norm (or squared error), it can be written via dot products, which can lead to vast improvements in speed of computation. It's also the only Lp space that's a Hilbert space, which is a nice feature to have.
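As an aside, experimenting with other exponents numerically is easy. A minimal R sketch (a hypothetical location-estimation example) minimizing $\sum_i |x_i - c|^p$ over $c$ for a few values of $p$:

    set.seed(7)
    x <- rexp(200)   # skewed data, so the estimates differ

    loc_est <- function(p, x) {
      optimize(function(c) sum(abs(x - c)^p), interval = range(x))$minimum
    }

    sapply(c(1, 1.95, 2, 2.05), loc_est, x = x)
    # p = 1 recovers (approximately) the median, p = 2 the mean;
    # intermediate powers fall somewhere in between.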
|
The Wald, Likelihood Ratio and Lagrange Multiplier tests in the context of maximum likelihood estimation are asymptotically equivalent. However, for small samples, they tend to diverge quite a bit, and in some cases they result in different conclusions.
How can they be ranked according to how likely they are to reject the null? What to do when the tests have conflicting answers? Can you just pick the one which gives the answer you want or is there a "rule" or "guideline" as to how to proceed?
|
[
"https://stats.stackexchange.com/questions/359",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/90/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
I do not know the literature in the area well enough to offer a direct response. However, it seems to me that if the three tests differ then that is an indication that you need further research/data collection in order to definitively answer your question.
You may also want to look at this Google Scholar search
Update in response to your comment:
If collecting additional data is not possible then there is one workaround. Do a simulation which mirrors your data structure, sample size and your proposed model. You can set the parameters to some pre-specified values. Estimate the model using the data generated and then check which one of the three tests points you to the right model. Such a simulation would offer some guidance as to which test to use for your real data. Does that make sense?
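A rough sketch of such a simulation in R, using a hypothetical logistic-regression setup (the sample size and effect size are placeholders you would replace to mirror your own data); it estimates how often the Wald and likelihood-ratio tests reject at the 5% level. The Lagrange-multiplier test is omitted only because base R has no one-line version of it.

    set.seed(123)
    n_sim <- 1000; n <- 40; beta1 <- 0.8   # placeholders: adapt to your data structure

    reject <- replicate(n_sim, {
      x <- rnorm(n)
      y <- rbinom(n, 1, plogis(-0.5 + beta1 * x))
      fit1 <- glm(y ~ x, family = binomial)
      fit0 <- glm(y ~ 1, family = binomial)
      wald <- summary(fit1)$coefficients["x", "Pr(>|z|)"] < 0.05
      lr   <- anova(fit0, fit1, test = "Chisq")$`Pr(>Chi)`[2] < 0.05
      c(wald = wald, lr = lr)
    })
    rowMeans(reject)   # empirical rejection rate of each test under the chosen scenario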
|
What is the difference between the Shapiro–Wilk test of normality and the Kolmogorov–Smirnov test of normality? When will results from these two methods differ?
|
[
"https://stats.stackexchange.com/questions/362",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/196/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
You can't really even compare the two since the Kolmogorov-Smirnov is for a completely specified distribution (so if you're testing normality, you must specify the mean and variance; they can't be estimated from the data*), while the Shapiro-Wilk is for normality, with unspecified mean and variance.
* you also can't standardize by using estimated parameters and test for standard normal; that's actually the same thing.
One way to compare would be to supplement the Shapiro-Wilk with a test for specified mean and variance in a normal (combining the tests in some manner), or by having the KS tables adjusted for the parameter estimation (but then it's no longer distribution-free).
There is such a test (equivalent to the Kolmogorov-Smirnov with estimated parameters) - the Lilliefors test; the normality-test version could be validly compared to the Shapiro-Wilk (and will generally have lower power). More competitive is the Anderson-Darling test (which must also be adjusted for parameter estimation for a comparison to be valid).
As for what they test - the KS test (and the Lilliefors) looks at the largest difference between the empirical CDF and the specified distribution, while the Shapiro Wilk effectively compares two estimates of variance; the closely related Shapiro-Francia can be regarded as a monotonic function of the squared correlation in a Q-Q plot; if I recall correctly, the Shapiro-Wilk also takes into account covariances between the order statistics.
Edited to add: While the Shapiro-Wilk nearly always beats the Lilliefors test on alternatives of interest, an example where it doesn't is the $t_{30}$ in medium-large samples ($n>60$-ish). There the Lilliefors has higher power.
[It should be kept in mind that there are many more tests for normality that are available than these.]
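For reference, all of the tests mentioned above are one-liners in R; the Lilliefors and Anderson-Darling versions assume the add-on nortest package.

    set.seed(1)
    x <- rt(100, df = 30)                            # mildly heavy-tailed data

    shapiro.test(x)                                  # Shapiro-Wilk: mean and variance unspecified
    ks.test(x, "pnorm", mean = 0, sd = sqrt(30/28))  # K-S needs a fully specified distribution
    # install.packages("nortest")
    nortest::lillie.test(x)                          # Lilliefors: K-S adjusted for estimated parameters
    nortest::ad.test(x)                              # Anderson-Darling (parameter-estimated version)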
|
If you could go back in time and tell yourself to read a specific book at the beginning of your career as a statistician, which book would it be?
|
[
"https://stats.stackexchange.com/questions/363",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/74/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
I am no statistician, and I haven't read that much on the topic, but perhaps
Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century
should be mentioned? It is no textbook, but still worth reading.
|
What topics in statistics are most useful/relevant to data mining?
|
[
"https://stats.stackexchange.com/questions/372",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/252/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Understanding the multivariate normal distribution (http://en.wikipedia.org/wiki/Multivariate_normal_distribution) is important.
The concept of correlation and, more generally, (non-linear) dependence is important.
Concentration of measure, asymptotic normality, convergence of random variables... how to turn something random into something deterministic! http://en.wikipedia.org/wiki/Convergence_of_random_variables
Maximum likelihood estimation (http://en.wikipedia.org/wiki/Maximum_likelihood) and, before that, statistical modeling :) and, more generally, minimum contrast estimation.
Stationary processes (http://en.wikipedia.org/wiki/Stationary_process) and, more generally, the stationarity assumption and ergodic properties.
As Peter said, the question is so broad that the answer couldn't be given in a single post...
|
From Wikipedia :
Suppose you're on a game show, and
you're given the choice of three
doors: Behind one door is a car;
behind the others, goats. You pick a
door, say No. 1, and the host, who
knows what's behind the doors, opens
another door, say No. 3, which has a
goat. He then says to you, "Do you
want to pick door No. 2?" Is it to
your advantage to switch your choice?
The answer is, of course, yes - but it's incredibly unintuitive. What misunderstanding do most people have about probability that leads to us scratching our heads -- or, better put, what general rule can we take away from this puzzle to better train our intuition in the future?
|
[
"https://stats.stackexchange.com/questions/373",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/252/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Consider two simple variations of the problem:
No doors are opened for the contestant. The host offers no help in picking a door. In this case it is obvious that the odds of picking the correct door are 1/3.
Before the contestant is asked to venture a guess, the host opens a door and reveals a goat. After the host reveals a goat, the contestant has to pick the car from the two remaining doors. In this case it is obvious that the odds of picking the correct door are 1/2.
For a contestant to know the probability of his door choice being correct, he has to know how many positive outcomes are available to him and divide that number by the amount of possible outcomes. Because of the two simple cases outlined above, it is very natural to think of all the possible outcomes available as the number of doors to choose from, and the amount of positive outcomes as the number of doors that conceal a car. Given this intuitive assumption, even if the host opens a door to reveal a goat after the contestant makes a guess, the probability of either door containing a car remains 1/2.
In reality, probability recognizes a set of possible outcomes larger than the three doors and it recognizes a set of positive outcomes that is larger than the singular door with the car. In the correct analysis of the problem, the host provides the contestant with new information making a new question to be addressed: what is the probability that my original guess is such that the new information provided by the host is sufficient to inform me of the correct door? In answering this question, the set of positive outcomes and the set of possible outcomes are not tangible doors and cars but rather abstract arrangements of the goats and car. The three possible outcomes are the three possible arrangements of two goats and one car behind three doors. The two positive outcomes are the two possible arrangements where the first guess of the contestant is false. In each of these two arrangements, the information given by the host (one of the two remaining doors is empty) is sufficient for the contestant to determine the door that conceals the car.
In summation:
We have a tendency to look for a simple mapping between physical manifestations of our choices (the doors and the cars) and the number of possible and desired outcomes in a question of probability. This works fine in cases where no new information is provided to the contestant. However, if the contestant is provided with more information (i.e., one of the doors you didn't choose certainly does not hide the car), this mapping breaks down and the correct question to be asked turns out to be more abstract.
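If intuition still resists, a brute-force simulation settles the matter. A minimal R sketch:

    set.seed(1)
    n <- 1e5
    car  <- sample(1:3, n, replace = TRUE)   # door hiding the car
    pick <- sample(1:3, n, replace = TRUE)   # contestant's first pick

    # The host opens a goat door that is neither the pick nor the car;
    # switching wins exactly when the first pick was wrong.
    mean(pick == car)   # probability of winning by staying  (about 1/3)
    mean(pick != car)   # probability of winning by switching (about 2/3)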
|
I usually make my own idiosyncratic choices when preparing plots. However, I wonder if there are any best practices for generating plots.
Note: Rob's comment to an answer to this question is very relevant here.
|
[
"https://stats.stackexchange.com/questions/396",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
The Tufte principles are very good practices when preparing plots. See also his book Beautiful Evidence
The principles include:
Keep a high data-ink ratio
Remove chart junk
Give graphical elements multiple functions
Keep in mind the data density
The term to search for is Information Visualization
|
There are many ways to measure how similar two probability distributions are. Among methods which are popular (in different circles) are:
the Kolmogorov distance: the sup-distance between the distribution functions;
the Kantorovich-Rubinstein distance: the maximum difference between the expectations w.r.t. the two distributions of functions with Lipschitz constant $1$, which also turns out to be the $L^1$ distance between the distribution functions;
the bounded-Lipschitz distance: like the K-R distance but the functions are also required to have absolute value at most $1$.
These have different advantages and disadvantages. Only convergence in the sense of 3. actually corresponds precisely to convergence in distribution; convergence in the sense of 1. or 2. is slightly stronger in general. (In particular, if $X_n=\frac{1}{n}$ with probability $1$, then $X_n$ converges to $0$ in distribution, but not in the Kolmogorov distance. However, if the limit distribution is continuous then this pathology doesn't occur.)
From the perspective of elementary probability or measure theory, 1. is very natural because it compares the probabilities of being in some set. A more sophisticated probabilistic perspective, on the other hand, tends to focus more on expectations than probabilities. Also, from the perspective of functional analysis, distances like 2. or 3. based on duality with some function space are very appealing, because there is a large set of mathematical tools for working with such things.
However, my impression (correct me if I'm wrong!) is that in statistics, the Kolmogorov distance is the usually preferred way of measuring similarity of distributions. I can guess one reason: if one of the distributions is discrete with finite support -- in particular, if it is the distribution of some real-world data -- then the Kolmogorov distance to a model distribution is easy to compute. (The K-R distance would be slightly harder to compute, and the B-L distance would probably be impossible in practical terms.)
So my question (finally) is, are there other reasons, either practical or theoretical, to favor the Kolmogorov distance (or some other distance) for statistical purposes?
|
[
"https://stats.stackexchange.com/questions/411",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/89/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Mark,
the main reason I am aware of for the use of K-S is that it arises naturally from Glivenko-Cantelli theorems in univariate empirical processes. The one reference I'd recommend is A. W. van der Vaart, "Asymptotic Statistics", ch. 19. A more advanced monograph is "Weak Convergence and Empirical Processes" by Wellner and van der Vaart.
I'd add two quick notes:
another measure of distance commonly used in univariate distributions is the Cramer-von Mises distance, which is an L^2 distance;
in general vector spaces, different distances are employed; the space of interest in many papers is Polish. A very good introduction is Billingsley's "Convergence of Probability Measures".
I apologize if I can't be more specific. I hope this helps.
|
What is a good introduction to statistics for a mathematician who is already well-versed in probability? I have two distinct motivations for asking, which may well lead to different suggestions:
I'd like to better understand the statistics motivation behind many problems considered by probabilists.
I'd like to know how to better interpret the results of Monte Carlo simulations which I sometimes do to form mathematical conjectures.
I'm open to the possibility that the best way to go is not to look for something like "Statistics for Probabilists" and just go to a more introductory source.
|
[
"https://stats.stackexchange.com/questions/414",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/89/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
As you said, it's not necessarily the case that a mathematician will want a rigorous book. Maybe the goal is to get some intuition for the concepts quickly, and then fill in the details. I recommend two books from CMU professors, both published by Springer: "All of Statistics" by Larry Wasserman is quick and informal. "Theory of Statistics" by Mark Schervish is rigorous and relatively complete. It has decision theory, finite-sample results, some asymptotics, and sequential analysis.
Added 7/28/10: There is one additional reference that is orthogonal to the other two: very rigorous, focused on learning theory, and short. It's by Smale (Steven Smale!) and Cucker, "On the Mathematical Foundations of Learning". Not easy read, but the best crash course on the theory.
|
Coming from the field of computer vision, I've often used the RANSAC (Random Sample Consensus) method for fitting models to data with lots of outliers.
However, I've never seen it used by statisticians, and I've always been under the impression that it wasn't considered a "statistically-sound" method. Why is that so? It is random in nature, which makes it harder to analyze, but so are bootstrapping methods.
Or is it simply a case of academic silos not talking to one another?
|
[
"https://stats.stackexchange.com/questions/418",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/77/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
I think that the key here is the discarding of a large portion of the data in RANSAC.
In most statistical applications, some distributions may have heavy tails, and therefore small sample sizes may skew statistical estimation. Robust estimators solve this by weighting the data differently. RANSAC, on the other hand, makes no attempt to accommodate the outliers; it's built for cases where the data points genuinely don't belong, not cases where they are merely distributed non-normally.
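For statisticians unfamiliar with the algorithm, here is a minimal R sketch of RANSAC for a straight-line fit (the threshold and iteration count are arbitrary illustration choices): candidate lines are fit to random minimal subsets, scored by how many points they explain within the threshold, and everything outside the best consensus set is simply discarded.

    ransac_line <- function(x, y, n_iter = 200, thresh = 0.5) {
      best_inliers <- logical(length(x))
      for (i in seq_len(n_iter)) {
        idx <- sample(length(x), 2)                     # minimal subset for a line
        fit <- lm(y ~ x, data = data.frame(x = x[idx], y = y[idx]))
        res <- abs(y - predict(fit, newdata = data.frame(x = x)))
        inliers <- res < thresh
        if (sum(inliers) > sum(best_inliers)) best_inliers <- inliers
      }
      lm(y ~ x, subset = best_inliers)                  # refit on the consensus set only
    }

    set.seed(2)
    x <- runif(100, 0, 10)
    y <- 2 + 0.5 * x + rnorm(100, sd = 0.2)
    y[1:20] <- runif(20, 0, 20)                         # gross outliers
    coef(ransac_line(x, y))                             # close to c(2, 0.5)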
|
What book would you recommend for scientists who are not statisticians?
Clear delivery is most appreciated. As well as the explanation of the appropriate techniques and methods for typical tasks: time series analysis, presentation and aggregation of large data sets.
|
[
"https://stats.stackexchange.com/questions/421",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/219/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
The answer would most definitely depend on their discipline, the methods/techniques that they would like to learn and their existing mathematical/statistical abilities.
For example, economists/social scientists who want to learn about cutting edge empirical econometrics could read Angrist and Pischke's Mostly Harmless Econometrics. This is a non-technical book covering the "natural experimental revolution" in economics. The book only presupposes that they know what regression is.
But I think the best book on applied regression is Gelman and Hill's Data Analysis Using Regression and Multilevel/Hierarchical Models. This covers basic regression, multilevel regression, and Bayesian methods in a clear and intuitive way. It would be good for any scientist with a basic background in statistics.
|
Data analysis cartoons can be useful for many reasons: they help communicate; they show that quantitative people have a sense of humor too; they can instigate good teaching moments; and they can help us remember important principles and lessons.
This is one of my favorites:
As a service to those who value this kind of resource, please share your favorite data analysis cartoon. They probably don't need any explanation (if they do, they're probably not good cartoons!) As always, one entry per answer. (This is in the vein of the Stack Overflow question What’s your favorite “programmer” cartoon?.)
P.S. Do not hotlink the cartoon without the site's permission please.
|
[
"https://stats.stackexchange.com/questions/423",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5/"
] | 8
|
HuggingFaceH4/stack-exchange-preferences
|
Was XKCD, so time for Dilbert:
Source: http://dilbert.com/strip/2001-10-25
|
It has been suggested by Angrist and Pischke that robust (i.e., robust to heteroskedasticity or unequal variances) standard errors be reported as a matter of course rather than testing for heteroskedasticity first. Two questions:
What is the impact on the standard errors of doing so when there is homoskedasticity?
Does anybody actually do this in their work?
|
[
"https://stats.stackexchange.com/questions/452",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/215/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Using robust standard errors has become common practice in economics. Robust standard errors are typically larger than non-robust (standard?) standard errors, so the practice can be viewed as an effort to be conservative.
In large samples (e.g., if you are working with Census data with millions of observations or data sets with "just" thousands of observations), heteroskedasticity tests will almost surely turn up positive, so this approach is appropriate.
Another means of combating heteroskedasticity is weighted least squares, but this approach has come to be looked down upon because it changes the estimates of the parameters, unlike the use of robust standard errors. If your weights are incorrect, your estimates are biased. If your weights are right, however, you get smaller ("more efficient") standard errors than OLS with robust standard errors.
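In R, one common route is the sandwich and lmtest packages; a minimal sketch on hypothetical heteroskedastic data (the HC1 flavor matches Stata's default robust option):

    library(sandwich)
    library(lmtest)

    set.seed(1)
    x <- runif(500)
    y <- 1 + 2 * x + rnorm(500, sd = 0.5 + 2 * x)   # heteroskedastic errors
    fit <- lm(y ~ x)

    coeftest(fit)                                   # conventional (non-robust) standard errors
    coeftest(fit, vcov = vcovHC(fit, type = "HC1")) # heteroskedasticity-robust standard errors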
|
We're plotting time-series metrics in the context of network/server operations. The data has a 5-minute sample rate, and consists of things like CPU utilization, error rate, etc.
We're adding a horizontal "threshold" line to the graphs, to visually indicate a value threshold above which people should worry/take notice. For example, in the CPU utilization example, perhaps the "worry" threshold is 75%.
My team has some internal debate over what color this line should be:
Something like a bright red that clearly stands out from the background grid and data lines, and indicates this is a warning condition
Something more subtle and definitely NOT red, since the "ink" for the line doesn't represent any actual data, and thus attention shouldn't be drawn to it unnecessarily.
Would appreciate guidance / best practices...
|
[
"https://stats.stackexchange.com/questions/459",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/259/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
If it does not break your style guide, I would rather color the background of the plots red/(yellow/)green than just plot a line. In my view this makes it pretty clear to a user that values are fine in the green region and need attention in the red region. Just my 5¢.
|
I am not an expert on random forests, but I clearly understand that the key issue with random forests is the (random) tree generation. Can you explain to me how the trees are generated? (i.e., what distribution is used for tree generation?)
Thanks in advance!
|
[
"https://stats.stackexchange.com/questions/480",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/223/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Implementations of RF differ slightly. I know that Salford Systems' proprietary implementation is supposed to be better than the vanilla one in R. A description of the algorithm is in ESL by Friedman-Hastie-Tibshirani, 2nd ed, 3rd printing. An entire chapter (15th) is devoted to RF, and I find it actually clearer than the original paper. The tree construction algorithm is detailed on p.588; no need for me to reproduce it here, since the book is available online.
|
Another question about time series from me.
I have a dataset which gives daily records of violent incidents in a psychiatric hospital over three years. With help from my previous question I have been fiddling with it and am a bit happier with it now.
The thing I have now is that the daily series is very noisy. It fluctuates wildly, up and down, from 0 at times up to 20. Using loess plots and the forecast package (which I can highly recommend for novices like me) I just get a totally flat line, with massive confidence intervals from the forecast.
However, aggregated weekly or monthly, the data make a lot more sense. They sweep down from the start of the series, and then increase again in the middle. Loess plotting and the forecast package both produce something that looks a lot more meaningful.
It does feel a bit like cheating though. Am I just preferring the aggregated versions because they look nice with no real validity to it?
Or would it be better to compute a moving average and use that as the basis? I'm afraid I don't understand the theory behind all this well enough to be confident about what is acceptable.
|
[
"https://stats.stackexchange.com/questions/481",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/199/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
This totally depends on your time series and what effect you want to discover/prove, etc.
An important question here is what kind of periods you have in your data. Compute a spectrum of your data and see which frequencies are prominent.
Anyway, you are not lying when you decide to display aggregated values. When you are looking at effects that occur over weeks (like more violence in summer when the weather is hot), it is the right thing to do.
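In R, both the spectrum check and the weekly aggregation are short; a minimal sketch with hypothetical daily counts (the seasonal pattern is made up for illustration):

    set.seed(1)
    daily <- rpois(3 * 364, lambda = 3 + 2 * sin(2 * pi * (1:(3 * 364)) / 364))

    spectrum(daily)                                   # which frequencies dominate?

    weeks  <- rep(seq_len(length(daily) / 7), each = 7)
    weekly <- tapply(daily, weeks, sum)               # aggregate to weekly totals
    plot(as.numeric(weekly), type = "l", ylab = "incidents per week")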
Maybe you can also take a look at the Hilbert Huang Transform. This will give you Intrinsic Mode Functions that are very handy for visual analyses.
|
I have calculated AIC and AICc to compare two general linear mixed models; the AICs are positive, with model 1 having a lower AIC than model 2. However, the values for AICc are both negative (model 1 is still < model 2). Is it valid to use and compare negative AICc values?
|
[
"https://stats.stackexchange.com/questions/486",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/266/"
] | 7
|
HuggingFaceH4/stack-exchange-preferences
|
All that matters is the difference between two AIC (or, better, AICc) values, representing the fit to two models. The actual value of the AIC (or AICc), and whether it is positive or negative, means nothing. If you simply changed the units the data are expressed in, the AIC (and AICc) would change dramatically. But the difference between the AIC of the two alternative models would not change at all.
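A small R illustration of the units point, using a hypothetical pair of nested linear models in place of your mixed models: rescaling the response shifts both AIC values by the same constant (possibly making them negative), while the difference is untouched.

    set.seed(1)
    x1 <- rnorm(100); x2 <- rnorm(100)
    y  <- 1 + 2 * x1 + rnorm(100)

    fit1 <- lm(y ~ x1)
    fit2 <- lm(y ~ x1 + x2)
    AIC(fit1) - AIC(fit2)                             # the quantity that matters

    # change the units of the response: both AICs shift, the difference does not
    AIC(lm(y/1000 ~ x1)); AIC(lm(y/1000 ~ x1 + x2))   # can easily be negative
    AIC(lm(y/1000 ~ x1)) - AIC(lm(y/1000 ~ x1 + x2))  # same difference as before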
Bottom line: Ignore the actual value of AIC (or AICc) and whether it is positive or negative. Ignore also the ratio of two AIC (or AICc) values. Pay attention only to the difference.
|
What variable/feature selection procedures do you prefer for binary classification when there are many more variables/features than observations in the learning set? The aim here is to discuss which feature selection procedure best reduces the classification error.
We can fix notations for consistency: for $i \in \{0, 1\}$, let $\{x_1^i,\dots, x_{n_i}^i\}$ be the learning set of observations from group $i$. So $n_0 + n_1 = n$ is the size of the learning set. We set $p$ to be the number of features (i.e. the dimension of the feature space). Let $x[i]$ denote the $i$-th coordinate of $x \in \mathbb{R}^p$.
Please give full references if you cannot give the details.
EDIT (updated continuously): Procedures proposed in the answers below
Greedy forward selection Variable selection procedure for binary classification
Backward elimination Variable selection procedure for binary classification
Metropolis scanning / MCMC Variable selection procedure for binary classification
penalized logistic regression Variable selection procedure for binary classification
As this is community wiki there can be more discussion and update
I have one remark: in a certain sense, you all give a procedure that permits ordering of variables but not variable selection (you are quite evasive on how to select the number of features; I guess you all use cross-validation?). Can you improve the answers in this direction? (As this is community wiki, you don't need to be the answer's writer to add information about how to select the number of variables. I have opened a question in this direction here: Cross validation in very high dimension (to select the number of used variables in very high dimensional classification).)
|
[
"https://stats.stackexchange.com/questions/490",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/223/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
A very popular approach is penalized logistic regression, in which one maximizes the sum of the log-likelihood and a penalization term consisting of the L1-norm ("lasso"), the L2-norm ("ridge"), a combination of the two ("elastic net"), or a penalty associated with groups of variables ("group lasso"). This approach has several advantages:
It has strong theoretical properties, e.g., see this paper by Candes & Plan and close connections to compressed sensing;
It has accessible expositions, e.g., in Elements of Statistical Learning by Friedman-Hastie-Tibshirani (available online);
It has readily available software to fit models. R has the glmnet package which is very fast and works well with pretty large datasets. Python has scikit-learn, which includes L1- and L2-penalized logistic regression;
It works very well in practice, as shown in many application papers in image recognition, signal processing, biometrics, and finance.
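A minimal sketch with glmnet on simulated data (the lasso penalty alpha = 1 and the lambda.1se rule are just common choices, not the only ones):

    library(glmnet)

    set.seed(1)
    n <- 100; p <- 1000                          # many more features than observations
    X <- matrix(rnorm(n * p), n, p)
    beta <- c(rep(2, 5), rep(0, p - 5))          # only the first 5 features matter
    y <- rbinom(n, 1, plogis(X %*% beta))

    cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 1)   # lasso path, lambda chosen by CV
    b <- as.matrix(coef(cvfit, s = "lambda.1se"))[-1, 1]       # drop the intercept
    which(b != 0)                                              # selected features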
|
I've heard that when many regression model specifications (say, in OLS) are considered as possibilities for a dataset, this causes multiple comparison problems and the p-values and confidence intervals are no longer reliable. One extreme example of this is stepwise regression.
When can I use the data itself to help specify the model, and when is this not a valid approach? Do you always need to have a subject-matter-based theory to form the model?
|
[
"https://stats.stackexchange.com/questions/499",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/267/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
Variable selection techniques, in general (whether stepwise, backward, forward, all subsets, AIC, etc.), capitalize on chance or random patterns in the sample data that do not exist in the population. The technical term for this is over-fitting and it is especially problematic with small datasets, though it is not exclusive to them. By using a procedure that selects variables based on best fit, all of the random variation that looks like fit in this particular sample contributes to estimates and standard errors. This is a problem for both prediction and interpretation of the model.
Specifically, r-squared is too high and parameter estimates are biased (they are too far from 0), standard errors for parameters are too small (and thus p-values and intervals around parameters are too small/narrow).
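To see the problem directly, a small simulation in R is sobering: forward stepwise selection on purely noise predictors (hypothetical data, unrelated to the response by construction) still returns a model with "significant" coefficients and an inflated R-squared.

    set.seed(1)
    n <- 100; p <- 50
    X <- as.data.frame(matrix(rnorm(n * p), n, p))
    X$y <- rnorm(n)                                   # response is pure noise

    full <- lm(y ~ ., data = X)
    sel  <- step(lm(y ~ 1, data = X), scope = formula(full),
                 direction = "forward", trace = 0)
    summary(sel)                                      # several "significant" noise predictors,
                                                      # inflated R-squared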
The best line of defense against these problems is to build models thoughtfully and include the predictors that make sense based on theory, logic, and previous knowledge. If a variable selection procedure is necessary, you should select a method that penalizes the parameter estimates (shrinkage methods) by adjusting the parameters and standard errors to account for over-fitting. Some common shrinkage methods are Ridge Regression, Least Angle Regression, or the lasso. In addition, cross-validation using a training dataset and a test dataset or model-averaging can be useful to test or reduce the effects of over-fitting.
Harrell is a great source for a detailed discussion of these problems. Harrell (2001). "Regression Modeling Strategies."
|
What is your preferred method of checking for convergence when using Markov chain Monte Carlo for Bayesian inference, and why?
|
[
"https://stats.stackexchange.com/questions/507",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/215/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
I use the Gelman-Rubin convergence diagnostic as well. A potential problem with Gelman-Rubin is that it may mis-diagnose convergence if the shrink factor happens to be close to 1 by chance, in which case you can use a Gelman-Rubin-Brooks plot. See the "General Methods for Monitoring Convergence of Iterative Simulations" paper for details. This is supported in the coda package in R (for "Output analysis and diagnostics for Markov Chain Monte Carlo simulations"). coda also includes other functions (such as Geweke's convergence diagnostic).
You can also have a look at "boa: An R Package for MCMC Output Convergence
Assessment and Posterior Inference".
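A minimal sketch with coda, assuming you already have several chains of posterior draws stored as matrices (the draws below are simulated placeholders):

    library(coda)

    set.seed(1)
    # placeholders: two chains of 1000 draws for two parameters
    chain1 <- mcmc(cbind(mu = rnorm(1000), tau = rgamma(1000, 2, 1)))
    chain2 <- mcmc(cbind(mu = rnorm(1000), tau = rgamma(1000, 2, 1)))
    chains <- mcmc.list(chain1, chain2)

    gelman.diag(chains)      # potential scale reduction factors (want values near 1)
    gelman.plot(chains)      # Gelman-Rubin-Brooks plot: shrink factor vs. iteration
    geweke.diag(chains)      # Geweke z-scores comparing early vs. late parts of each chain
    effectiveSize(chains)    # effective sample size, another useful check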
|
In the context of machine learning, what is the difference between
unsupervised learning
supervised learning and
semi-supervised learning?
And what are some of the main algorithmic approaches to look at?
|
[
"https://stats.stackexchange.com/questions/517",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68/"
] | 6
|
HuggingFaceH4/stack-exchange-preferences
|
Generally, the problems of machine learning may be considered variations on function estimation for classification, prediction or modeling.
In supervised learning, one is furnished with inputs ($x_1$, $x_2$, ...) and outputs ($y_1$, $y_2$, ...) and is challenged with finding a function that approximates this behavior in a generalizable fashion. The output could be a class label (in classification) or a real number (in regression) -- these are the "supervision" in supervised learning.
In the case of unsupervised learning, in the base case, you receive inputs $x_1$, $x_2$, ..., but neither target outputs nor rewards from the environment are provided. Based on the problem (classification or prediction) and your background knowledge of the space sampled, you may use various methods: density estimation (estimating some underlying PDF for prediction), k-means clustering (classifying unlabeled real-valued data), k-modes clustering (classifying unlabeled categorical data), etc.
Semi-supervised learning involves function estimation on labeled and unlabeled data. This approach is motivated by the fact that labeled data is often costly to generate, whereas unlabeled data is generally not. The challenge here mostly involves the technical question of how to treat data mixed in this fashion. See this Semi-Supervised Learning Literature Survey for more details on semi-supervised learning methods.
In addition to these kinds of learning, there are others, such as reinforcement learning, whereby the learning method interacts with its environment by producing actions $a_1$, $a_2$, ... that produce rewards or punishments $r_1$, $r_2$, ...
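As a tiny unsupervised example, k-means in R partitions unlabeled two-dimensional data (simulated here) without ever seeing class labels:

    set.seed(1)
    x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
               matrix(rnorm(100, mean = 4), ncol = 2))   # two unlabeled clusters

    km <- kmeans(x, centers = 2, nstart = 20)
    table(km$cluster)          # the algorithm partitions the points on its own
    plot(x, col = km$cluster)  # visualize the discovered grouping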
|
Debugging MCMC programs is notoriously difficult. The difficulty arises because of several issues some of which are:
(a) Cyclic nature of the algorithm
We iteratively draw parameters conditional on all other parameters. Thus, if an implementation is not working properly, it is difficult to isolate the bug, as the issue can be anywhere in the iterative sampler.
(b) The correct answer is not necessarily known.
We have no way to tell if we have achieved convergence. To some extent this can be mitigated by testing the code on simulated data.
In light of the above issues, I was wondering if there is a standard technique that can be used to debug MCMC programs.
Edit
I wanted to share the approach I use to debug my own programs. I, of course, do all of the things that PeterR mentioned. Apart from those, I perform the following tests using simulated data:
Start all parameters from true values and see if the sampler diverges too far from the true values.
I have flags for each parameter in my iterative sampler that determines whether I am drawing that parameter in the iterative sampler. For example, if a flag 'gen_param1' is set to true then I draw 'param1' from its full conditional in the iterative sampler. If this is set to false then 'param1' is set to its true value.
Once I finish writing up the sampler, I test the program using the following recipe:
Set the generate flag for one parameter to true and everything else to false, and assess convergence with respect to the true value.
Set the generate flag for another parameter in conjunction with the first one and again assess convergence.
The above steps have been incredibly helpful to me.
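To make the flag idea concrete, here is a minimal R sketch of a Gibbs sampler for a simple normal model (the priors and true values are hypothetical), where each full-conditional draw can be switched off so that the parameter is held at its true value:

    set.seed(1)
    n <- 200; mu_true <- 2; tau_true <- 4                 # tau = precision
    y <- rnorm(n, mu_true, sd = 1 / sqrt(tau_true))

    gibbs <- function(y, n_iter = 5000, gen_mu = TRUE, gen_tau = TRUE,
                      mu_true = 2, tau_true = 4) {
      n <- length(y); mu <- 0; tau <- 1
      out <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("mu", "tau")))
      for (i in seq_len(n_iter)) {
        if (gen_mu) {                                     # full conditional for mu, N(0, 100) prior
          prec <- tau * n + 1 / 100
          mu <- rnorm(1, tau * sum(y) / prec, sqrt(1 / prec))
        } else mu <- mu_true                              # ...or hold mu at its true value
        if (gen_tau) {                                    # full conditional for tau, Gamma(1, 1) prior
          tau <- rgamma(1, 1 + n / 2, 1 + 0.5 * sum((y - mu)^2))
        } else tau <- tau_true                            # ...or hold tau at its true value
        out[i, ] <- c(mu, tau)
      }
      out
    }

    colMeans(gibbs(y, gen_tau = FALSE))   # test the mu update alone
    colMeans(gibbs(y))                    # then turn everything on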
|
[
"https://stats.stackexchange.com/questions/524",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] | 5
|
HuggingFaceH4/stack-exchange-preferences
|
Standard programming practice:
when debugging run the simulation with fixed sources of randomness (i.e. same seed) so that any changes are due to code changes and not different random numbers.
try your code on a model (or several models) where the answer IS known.
adopt good programming habits so that you introduce fewer bugs.
think very hard & long about the answers you do get, whether they make sense, etc.
I wish you good luck, and plenty of coffee!
|
As you know, there are two popular types of cross-validation, K-fold and random subsampling (as described in Wikipedia). Nevertheless, I know that some researchers make and publish papers where something described as K-fold CV is in fact random subsampling, so in practice you never know what is really in the article you're reading.
Usually, of course, the difference is unnoticeable, and so goes my question -- can you think of an example where the result of one type differs significantly from the other?
|
[
"https://stats.stackexchange.com/questions/526",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
] | 4
|
HuggingFaceH4/stack-exchange-preferences
|
You can certainly get different results simply because you train on different examples. I very much doubt that there's an algorithm or problem domain where the results of the two would differ in some predictable way.
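For what it's worth, the two schemes are easy to run side by side in R; a hypothetical logistic-regression sketch, in which the estimated error rates typically come out very close:

    set.seed(1)
    n <- 200
    x <- rnorm(n); y <- rbinom(n, 1, plogis(x))
    dat <- data.frame(x, y)

    err <- function(train, test) {
      fit <- glm(y ~ x, family = binomial, data = dat[train, ])
      mean((predict(fit, dat[test, ], type = "response") > 0.5) != dat$y[test])
    }

    # 10-fold CV: each observation is held out exactly once
    folds <- sample(rep(1:10, length.out = n))
    kfold <- mean(sapply(1:10, function(k) err(which(folds != k), which(folds == k))))

    # random subsampling: repeated random 90/10 splits
    subsamp <- mean(replicate(50, {
      test <- sample(n, n %/% 10)
      err(setdiff(1:n, test), test)
    }))

    c(kfold = kfold, subsampling = subsamp)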
|