This is my intellectual property; I alone have full rights to it (well, sort of). In the past, I have had more valuable ideas, methods, and procedures taken by multinational investment banks and actuarial consulting firms. So the value of a couple of simple spreadsheet programs designed to help Athletics Directors is trivial in comparison.
These ideas and methods are now, and have long been, in the public domain; nothing all that exciting, but I am proud of the observations I have made about the bias in the rankings and about using a ranking system to more objectively select the athlete of the day. All I am saying is that it is far better to simply use those ideas rather than claiming them as your own when you do. When you claim other people's ideas as your own, you come off as a dick.
I made these Excel programs to help other Athletics Directors, not to be exploited! This is why things are NOW locked down versus earlier versions, which were freeware with no passwords.
However, the major reason the Scoring Spreadsheet Program is locked down is errors. I will accept responsibility for errors I made, but if it were open source, I fear I would be blamed for errors caused by other people's changes. There is less of a concern for the Judging Sheets Program, and that is why it is mostly unprotected.
Additional Information for Scoring Spreadsheet Program
Being a numbers geek, I have looked at Games data in some unique ways. I was the first to move away from NASGA's style of ranking (which is based on one set of records for all divisions) to a far better way of looking at results.
(1) There is nothing wrong with NASGA's ranking; it just does not lend itself to easy interpretation by the uninitiated. Under NASGA's system, an athlete who ties the North American record in an event receives 1,000 points for that event. The overall score used to rank athletes is the sum of the points earned for each event.
Ranking each event as a percentage of that event's record has an intuitive interpretation: throwing 90% of the record is far more intuitive than earning 900 points. Under this method, the overall score used to rank athletes is the average of each event's percentage of that event's record.
Under NASGA (decathlon-style) ranking over nine events, a score of 8,000 is 88.89% (= 8000/9000) of the record set. Saying the thrower, on average, threw 88.89% of the records has more intuitive appeal than saying the thrower earned 8,000 points, especially to the uninitiated.
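The two ranking styles above can be sketched in a few lines. The events, distances, and records below are made up for illustration; only the 1,000-points-per-record convention and the nine-event card come from the text.

```python
# Sketch of the two ranking styles: NASGA-style points vs.
# average percentage of each event's record.
# All distances and records here are hypothetical, not real NASGA data.

def decathlon_style_score(throws, records, pts_per_record=1000):
    """NASGA-style: tying a record earns 1,000 pts; sum across events."""
    return sum(pts_per_record * t / r for t, r in zip(throws, records))

def percent_of_record_score(throws, records):
    """Percentage style: average each event's % of its record."""
    return sum(100.0 * t / r for t, r in zip(throws, records)) / len(throws)

# Nine events, invented distances (feet):
records = [90.0, 45.0, 130.0, 170.0, 90.0, 120.0, 40.0, 16.0, 34.0]
throws  = [80.0, 40.0, 115.0, 151.0, 80.0, 107.0, 36.0, 14.0, 30.0]

pts = decathlon_style_score(throws, records)
pct = percent_of_record_score(throws, records)
# Over 9 events the two scales agree: a 9,000-pt max maps to 100%,
# so 8,000 pts corresponds to 8000/9000 = 88.89%.
print(f"{pts:.0f} pts  ~  {pct:.2f}% of records on average")
```

Note that over nine events the percentage score is just the point score divided by 90, which is why the two rankings order athletes identically within a single record set.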
(2) When comparing athletes within a division, results should be expressed as a percentage of that division's records (whether field, state, country, … records). When using a record set tied to the pool of athletes that generated those records, no bias is introduced into the rankings (whether decathlon-style or percentage-of-record ranking is used). This compares apples to apples and removes the biases that can be generated by using another division's records.
A bias example: NASGA uses North American records as its record set and applies them to all throwing divisions. NASGA's method generates a known bias in the Women's and Lightweight rankings.
For instance, when the women’s division is ranked under NASGA’s method:
Thrower A, who tied the women's world record in HWFD, would receive 1,065.8 pts (106.58% of the men's record).
Thrower B, who tied the women's heavy hammer record, would receive 749.33 pts (74.93% of the men's heavy hammer record).
If these two throwers had mirror results in the other events, Thrower A would be ranked higher than Thrower B. Thus the bias: both tied a record in one event, and with all else equal, they should have the same ranking score.
NOTE: If a division's record is greater than the North American men's record, that event will be over-weighted by NASGA's method; if it is less, the event will be under-weighted.
This idea has been adopted and is being used by a variety of invitational events and championships, as it is the right way of doing things. No one has claimed it as their own, and that is awesome.
(3) When comparing athletes across divisions, the only objective way to do it is to compare their throws as a percentage of their division's records (or decathlon-style scores based on the division's records). Over the last decade-plus, I have often suggested this as the best way to pick the athlete of the day.
Here is an example: at a particular event, four divisions were contested: Masters Men, Masters Women, Open, and Lightweight. How do you compare the athletes across all the divisions? My solution for over a decade has been to express their results as a percentage of their division's record set and average that percentage over all events contested.
When this was done, the best Masters Women's average was 92.0%, the best Masters Men's was 90.5%, the best Open thrower's was 78.4%, and the Lightweight threw an amazing 98.5% of the Lightweight field records.
I would give the athlete of the day to the Lightweight.
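The athlete-of-the-day selection above is just a max over division-relative averages. The averages below are the ones quoted in the text; the dictionary layout is my own sketch.

```python
# Athlete of the day: each division's best thrower is scored as the
# average percentage of that division's own records, then compared
# directly across divisions. Percentages are from the text's example.

day_results = {
    "Masters Women": 92.0,
    "Masters Men":   90.5,
    "Open":          78.4,
    "Lightweight":   98.5,
}

athlete_of_the_day = max(day_results, key=day_results.get)
print(athlete_of_the_day)  # the Lightweight, at 98.5% of division records
```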
Do you know anybody who is doing this and acting like he came up with it? I do!!!
(4) When comparing athletes across time within a division, an objective comparison results when the throws are compared to the division's record set on a point-in-time basis, or to the record set at a fixed point in time (such as the current records). Each comparison has a different meaning:
* If you use the point-in-time calculation, you are comparing how well that athlete dominated in his time period.
* If you use the current records, you are comparing how that athlete would stack up against the current competition.
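The two across-time comparisons can be sketched with the same percentage-of-record average used throughout. Every distance and record below is invented purely to illustrate the mechanics.

```python
# Across-time comparison: score one performance against the division
# records as they stood at the time, and against the current records.
# All numbers are invented for illustration.

def avg_pct(throws, records):
    """Average percentage of the given record set across events."""
    return sum(100.0 * t / r for t, r in zip(throws, records)) / len(throws)

# A hypothetical 2012 performance, three events for brevity (feet):
throws_2012 = [78.0, 105.0, 33.0]

records_in_2012 = [82.0, 110.0, 35.0]   # division records as of 2012
records_current = [88.0, 118.0, 37.5]   # the same division's records today

dominance_then = avg_pct(throws_2012, records_in_2012)  # vs. the athlete's own era
vs_today       = avg_pct(throws_2012, records_current)  # vs. current competition
print(f"{dominance_then:.1f}% of 2012 records, {vs_today:.1f}% of current records")
```

If the records have advanced since the performance, the point-in-time score will exceed the current-records score, which is exactly why the two comparisons answer different questions.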
I have been doing this with the Lightweight national championship for over 5 years, and it makes for some interesting comparisons.
These are things I have championed for a very, very long time. They are being used by others and by other championships, and I have a sneaking suspicion about the source of their comparisons. "Imitation is the sincerest form of flattery"? Whoever said that was full of dog excrement.
My scoring spreadsheet has championed these ideas almost from the beginning (well before 2010).