G, or the test statistic, can be determined for a specific point by taking the deviation of that test value from the mean of all the values and then dividing by the standard deviation of all the values (s), including the test value. The specific entry to be tested should be the one that has the greatest distance from the average of all the values (indicated by the "max" in the equation above). Naturally, the first point we want to test as an outlier is the point furthest away from the rest of the points. Once we have calculated G, our test statistic, we can compare it to the critical region to determine whether the point being tested is an outlier.
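As a quick sketch in Python (the data values here are made up purely for illustration):

```python
import statistics

# Hypothetical sample; 14.7 sits noticeably far from the rest
values = [9.8, 10.1, 10.0, 10.2, 9.9, 14.7]

mean = statistics.mean(values)
s = statistics.stdev(values)  # sample standard deviation, test value included

# The test statistic G: the largest absolute deviation from the mean, over s
G = max(abs(v - mean) for v in values) / s
print(G)
```

For this sample, G comes out a little above 2, driven by the 14.7 entry.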
The equation for the critical region relies on the number of points in the data sample (N) and an upper critical value of the t-distribution, denoted by 't' (go ahead and ignore the subscript following the letter; it basically marks that the t-value is taken with N-2 degrees of freedom). We determine t by looking up its value in a t-distribution table, according to the degree of confidence we wish to have that a value above the calculated threshold falls outside a normal distribution. If you wish to learn more about how exactly this works, please feel free to visit Wikipedia's article on the t-distribution. But now we must press forward.
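In code, the table lookup can be replaced by the inverse CDF (percent-point function) of the t-distribution. This sketch assumes SciPy is available; `scipy.stats.t.ppf` does the lookup for us:

```python
from scipy.stats import t as t_dist

# For a two-sided Grubbs test on N points at a 95% confidence level,
# the conventional significance level for the lookup is alpha / (2N),
# with N - 2 degrees of freedom.
N = 6
alpha = 0.05
t_value = t_dist.ppf(1 - alpha / (2 * N), N - 2)
print(t_value)
```

For N = 6, this gives a t-value in the neighborhood of 4.85, matching what a printed t-table would show for that significance level and 4 degrees of freedom.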
We calculate the critical region by plugging in the number of entries in our data sample (N) and the t-distribution value (t) appropriate to that number of entries and the degree of certainty of our choosing (such as 95%). If our test statistic G is greater than the critical region, the value in question is an outlier (with 95% certainty in this case). Higher degrees of certainty used in determining the t-value will flag fewer points as outliers, and vice versa.
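Putting the pieces together, here is a minimal sketch of the whole test in Python. The data values are hypothetical, and the t-value (roughly 4.85 for N = 6 at 95% certainty, two-sided) is hardcoded as if read from a t-table:

```python
import math
import statistics

values = [9.8, 10.1, 10.0, 10.2, 9.9, 14.7]  # made-up data; 14.7 looks suspicious
N = len(values)

mean = statistics.mean(values)
s = statistics.stdev(values)  # sample standard deviation, test value included

# Test statistic: largest absolute deviation from the mean, divided by s
G = max(abs(v - mean) for v in values) / s

# Upper critical t-value for N - 2 = 4 degrees of freedom at the
# 0.05 / (2N) significance level, read off a t-table (an assumed value)
t_value = 4.85

# Critical region from the Grubbs formula
critical = ((N - 1) / math.sqrt(N)) * math.sqrt(
    t_value**2 / (N - 2 + t_value**2)
)

is_outlier = G > critical
print(G, critical, is_outlier)
```

Here G (about 2.04) exceeds the critical region (about 1.89), so 14.7 would be flagged as an outlier at the 95% level.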
But enough math. Time to program.
(If you really are interested in the math, the Wikipedia articles on Grubbs' test and outliers are very informative.)