New version of statistical analysis tool, Removes some limitations, normality assumption 
Feb 3 2011, 20:54
Post
#1


Server Admin Group: Admin Posts: 4888 Joined: 24-September 01 Member No.: 13 
Some time ago I needed an analysis of some test results and tried to use the bootstrap utility we have used for the listening tests. Unfortunately, the results coming out were bogus. I traced it down to an obscure 64-bit compatibility issue, but while going through the code, some things bothered me. ff123 improved my initial version significantly, but one of the changes was to use a normal distribution approximation for the test statistics. Considering the original utility was written precisely to avoid any assumptions of normality, that's a bit sad.
So I ended up rewriting the whole thing and fixing all outstanding issues in the new version.
This is new code, so it might still contain some bugs. Any feedback appreciated. Download page 


Jul 29 2011, 11:36
Post
#2


Group: Members Posts: 1 Joined: 29-July 11 Member No.: 92637 
A quick question: why is it that the usual (binomial) p-values for n trials and k successes are calculated as (in pseudo-TeX notation): \sum_{i = 0}^{k} \choose{n}{i} p^i q^{n-i} where p is the probability of success in a Bernoulli experiment and q = 1 - p, instead of only: \choose{n}{k} p^k q^{n-k} If the k trials the person correctly marked are the "correct sample", and there are \choose{n}{k} possibilities of choosing k from a row of n experiments, why are we summing over other values of k? 


Aug 23 2011, 11:37
Post
#3


Server Admin Group: Admin Posts: 4888 Joined: 24-September 01 Member No.: 13 
Because we're interested in the odds that random guessing will produce a score of k successes or more.
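In other words, the p-value is the upper tail of the binomial distribution, \sum_{i = k}^{n} \choose{n}{i} p^i q^{n-i}, not the probability of exactly k successes. A minimal sketch of that tail sum (the function name and the 16-trial example are my own illustration, not part of the tool):

```python
from math import comb

def binomial_p_value(n, k, p=0.5):
    """One-sided p-value: the probability that a random guesser,
    succeeding with probability p on each of n independent trials,
    scores k successes or more."""
    q = 1 - p
    return sum(comb(n, i) * p**i * q**(n - i) for i in range(k, n + 1))

# Example: 12 correct answers out of 16 ABX trials (p = 0.5)
print(round(binomial_p_value(16, 12), 4))  # 0.0384
```

Note that for p = 0.5 this reduces to counting the outcomes with at least k successes and dividing by 2^n, which is why the single term \choose{n}{k} p^k q^{n-k} alone would understate how often chance does "at least this well".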


