How long will the assessments take to complete?
Small self-assessment surveys (e.g. 1-5 people) can usually be completed within five business days. We recommend two weeks for larger self-assessment surveys, as you need to allow for participants who are on leave, off sick, at training courses, etc.
180° or 360° assessment surveys generally require two to four weeks. It is your responsibility to ensure that all participant details are provided (in the format needed) as early as possible; otherwise, you run the risk that assessments will not be completed in time. Make this a priority.
What do I need to think about when choosing a start date?
There are several factors to consider:
- To ensure that all our valued clients receive a reliable, high-quality service, reports for each assessment survey are processed in a single batch. This means that if the deadline needs to be extended for one or more participants to complete their feedback, the printing and delivery of the entire batch of reports will be delayed.
- Although 360° assessment surveys can take up to four weeks to complete, we recommend setting the survey start date one week earlier than strictly required, which leaves one week for extensions if they are necessary.
- Printing and delivery of reports typically takes 3-4 days depending on location, so we recommend allowing time for this step as well.
By way of example, we would recommend the following timeline for a 360° assessment survey:
- Create assessment survey with a deadline of three weeks – Welcome emails will advertise an appropriate survey close date.
- One-week contingency – so that if anybody fails to provide feedback within the initial assessment period, the survey can be extended.
- 2-3 days for printing and delivery of the reports.
Development Tips workbooks must be included with the reports.
Introduction modules are also complimentary and can be used before the assessment commences. Certified Practitioners know how to deliver these sessions.
Check the Member portal for other helpful resources, such as a preliminary email template and alternative email communications.
How is 'consistency' calculated?
The conundrum people often have with consistency scores is: "How can there be low consistency, i.e. high variation, when the average scores are all relatively high?" The explanation is actually quite simple. You can have a number of items where the scores are quite far apart, yet the average still ends up being close to four.
Consider the small example below:
| Question | Score 1 | Score 2 | Average |
|----------|---------|---------|---------|
| 1        | 5       | 3       | 4       |
| 2        | 3       | 5       | 4       |
| 3        | 5       | 3       | 4       |
As you can see in this example, there is quite a large amount of variation in the scores, but the average is still four, which is relatively high.

Also, the consistency score depicted in the reports is a benchmarked score: the graph does not show how consistent the scores are in absolute terms, but how consistent they are relative to the rest of the norm group. When we calculate benchmarks, we also calculate the consistency score for every set of responses included in the norm group, then create a percentile map based on those scores. That percentile map is what we use to determine the benchmarked consistency score in an individual report.
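The percentile-benchmarking idea can be sketched in a few lines of Python. This is a minimal illustration only, not the actual Genos formula: it assumes population standard deviation as the raw consistency measure and a simple percentile rank over the norm group, neither of which is specified above.

```python
import statistics


def raw_consistency(scores):
    """Spread of one response set's item scores.

    Lower standard deviation = more consistent. (Illustrative metric only;
    the actual formula used in the reports is not published here.)
    """
    return statistics.pstdev(scores)


def benchmarked_consistency(individual_scores, norm_group):
    """Percentile of this response set's consistency relative to the norm group.

    norm_group: a list of score lists, one per response set in the norm group.
    Returns 0-100; a higher value means this set varies less than more of the
    norm group, i.e. it is comparatively more consistent.
    """
    norm_spreads = [raw_consistency(s) for s in norm_group]
    raw = raw_consistency(individual_scores)
    # Share of norm-group response sets that show MORE variation than this one.
    more_varied = sum(1 for v in norm_spreads if v > raw)
    return 100 * more_varied / len(norm_spreads)
```

So a response set whose scores swing widely can still average four, but its standard deviation will sit high in the norm group's distribution, producing a low benchmarked consistency score.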
Genos Surveys Administration
Can I unlock an old survey?
Unlocking old surveys is complicated, and we usually don’t do it, for the following reasons:
- First and foremost, an EI report should represent a snapshot in time. That is, the data collection window should ideally be as narrow as possible. In practice, a week or two is fine, but if a month or more has passed then it is not inconceivable that some event or interaction may have occurred that skews the responses of the late rater(s) one way or the other.
- Rater privacy can be compromised if two versions of a report are provided to a participant, as it allows the participant to do a side-by-side comparison between the two reports and work out the incremental impact of the late rater(s). Given that the participant often knows who the late rater(s) were, their confidentiality ends up being compromised as a result. As our entire business rests upon our ability to get people to respond to surveys on behalf of other people, maintaining our credibility by ensuring their privacy is paramount.
- The process itself can be logistically complicated, especially in larger surveys, as you need to ensure that all of the reports for which you do not want to collect further data are locked so that none of the outstanding raters for those groups is sent an email.
If we knew that people fully understood all of the nuances around unlocking older surveys then we would allow everybody to do it, but as many people are not very detail-oriented and often prioritise the demands of their client over everything else, we currently feel that opening this ability up to the network could be harmful.