Introduction of JS 3.0

March 31st, 2017

Dear Members, dear Adjudicators,
I am extremely pleased to report to you that JS 3.0 is now ready and was successfully trialled before being used for the Grand Slam Series in Helsinki.
On behalf of the WDSF Praesidium, we thank you all for your feedback on JS 2.1, and we appreciate your patience during the development of the much more efficient JS 3.0.
In September 2015, after receiving feedback on the system, I was tasked by the Managing Committee to engage an independent professional to look into the integrity of the JS 2.1 system. We appointed Dr Martyn Kibel of Australia, who later submitted a very impressive report with sound recommendations.
With the information produced in the report, the first prototype of JS 3.0 was developed with the help of Mr Dallas Williams and used in a non-WDSF event in Singapore in 2016 with a panel of well-qualified judges. A post-mortem was conducted after the event and additional modifications were made to the programme.
After the establishment of the WDSF Competition Manipulation Taskforce, the revamp of JS 2.1 became a priority in August 2016. I was tasked to work on a new proposal and to be ready for the next taskforce meeting in November 2016. At this meeting the new features of the proposed JS were approved for development.
Following this meeting a JS Technical Team as presented below was established to develop the new software:
1. Dallas Williams (Australia) – JS IT Team Leader
2. Olav Groen (Germany)
3. Stephan Rath (Germany)
I will provide you with some information on the past shortcomings of JS 2.1 and the outstanding, positive features of JS 3.0:
Negative aspects of JS 2.1
1. Easy to manipulate the result with a smaller judging group; it takes only 2 colluding judges to change or affect the final result
2. Frequent changes of judging components affect the concentration of judges
3. It takes 1 to 3 judges to push for a higher or a lower overall score
4. Impossible to calculate manually
5. Imposing a “range” of scores that restricts judging
6. Often creates big deviations in scores from round to round and from competition to competition
7. Easy to manipulate with a group of 3 judges
8. Median from the 2nd score is inaccurate
9. Chairperson interferes with judging by setting the range
10. Drastic scoring changes in the final round to push the results of certain couples.
New Features of JS 3.0
1. Grouping of 12 judges into 2 groups of 6 judges
2. Twice as many adjudicator scores (6 scores per component) will be used to calculate the results.
3. In the quarter-finals and semi-finals, the 2 groups of judges each judge a single component combination of TQ/PS or MM/CP. Technically, TQ and PS are closely related, as are MM and CP.
4. No change of components for the entire round allowing judges to focus on the criteria.
5. In the final, for the group dance, judges will again judge a single component combination of TQ/PS or MM/CP. In the solo dance, the judges will judge 2 components each.
6. Use a median derived from an average score of the 6 scores
7. Use a tolerance marking scale to eliminate possible manipulative scores. The tolerance range is set at 1.2 for GS and 1.5 for WO. This can be changed easily if necessary.
8. More accurate results with fewer possible distortions from the 6 scores
9. More difficult to collude and effectively affect the overall result
10. Possible to calculate manually if the computer fails
11. Prevents manipulative judging by eliminating “fake” scores outside the tolerance scales
12. Works well with 10 judges as well
13. The Chairperson will not set the judging range.
14. Addition of 0.25 marking scales as reductions. For now, 0.25 should be awarded as a reduction rather than an addition. For example, a couple that consistently presents the qualities of a 9, with slight occasional mistakes or deficiencies, can be awarded an 8.75. We will continue to develop judging standards to support these new scales.
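The tolerance filtering described in the features above can be sketched in code. This is a minimal illustration under stated assumptions, not the actual JS 3.0 implementation: the letter does not specify whether the reference value is the mean or the median of the 6 scores, nor the exact elimination and averaging rules, so the function name `component_score`, the mean-based reference, and the final averaging step are all hypothetical.

```python
def component_score(scores, tolerance=1.2):
    """Sketch of tolerance-based score filtering (assumptions, not WDSF spec).

    Assumed procedure: take the 6 adjudicator scores for one component,
    compute a reference value (here, the mean), discard any score that
    deviates from it by more than the tolerance (1.2 for GS, 1.5 for WO
    per the letter), and average the remaining scores.
    """
    if len(scores) != 6:
        raise ValueError("expected 6 adjudicator scores per component")
    reference = sum(scores) / len(scores)
    # Eliminate possible manipulative ("fake") scores outside the tolerance
    kept = [s for s in scores if abs(s - reference) <= tolerance]
    return sum(kept) / len(kept)
```

For instance, with scores of 8.0, 8.25, 8.0, 7.75, 8.0 and an outlying 9.75, the outlier falls more than 1.2 from the mean and is discarded, so the component score is the average of the remaining five marks. Because only 6 numbers per component are involved, this check is also simple enough to perform manually if the computer fails.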
The panels of judges from the past 2 events using JS 3.0 reported that they were very satisfied with the system. The responses from the athletes were also positive. We have practically removed all the negative aspects of JS 2.1 while continuing our efforts to improve JS 3.0.
For the Competition Manipulation Task Force, the JS 3.0 judging software is only part of the system. The Technical Team will develop the Adjudicators Assessment programme to provide accurate feedback and ensure the accountability of the assessments awarded by the adjudicators.
With the introduction of JS 3.0, there is NO need for adjudicators who passed the JS 2.1 course to attend another course or take additional exams. The concept of absolute judgement remains the pillar behind the WDSF judging systems. For standardisation, the JS 2.1 and JS 3.0 qualifications will be unified as AJS (Absolute Judgement System), and all future courses on the JS will be called Absolute Judgement Courses.
For your information, Absolute Judgement is an IOC-approved methodology used by some artistic Olympic medal sports. The use of this judging method was the reason that DanceSport was accepted into the Asian Games in 2010. It is very important that we continue to promote more objectivity in our judging system if we want to maintain recognition as a sport.
Thank you for your support of the much-improved JS 3.0 system. This is a big step forward towards a better judging system. We welcome any constructive comments and suggestions to further improve the system. Please forward your suggestions to our JS IT Team Leader, Mr Dallas Williams, at

Shawn Tay
WDSF General Secretary

Chairman, WDSF DanceSport Academy