Hello dear writers, I hope this blog post finds you well!
This year, we've talked a little about how we're restructuring the feedback system, and now we can finally tell you how, why, and what that means moving forward.
We gathered up all the feedback we received last year, and a pattern started to form. The feedback we were providing, while positive, lacked structure, and for the most part wasn't as critical or explanatory as you'd have liked.
We tasked our judges with writing hundreds of words, and it was a tall order. It put them under a lot of pressure to get the reading done, and we felt it was counterproductive to keep that practice going. So, in 2019, we changed things up to refine our processes and deliver more structured, consistent feedback to you, while alleviating the pressure on our judges and giving them more time to enjoy the reading!
With this new system, we've been able to give the judges more time to read and consider your work by using a series of graded metrics to order the field. This means your work is scored on a scale in five core aspects we consider to encompass 'writing': Spelling & Grammar, Style, Concept, Composition, and Realisation.
When the judge reading your work comes to grade it, they use a Likert-type model to place your work within the field. Each score then corresponds to a section of pre-generated feedback that applies to your work.
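For the curious, the scoring model described above could be sketched roughly like this: five core metrics, each graded on a 1–5 Likert-type scale, with each score mapped to a pre-written feedback snippet. This is purely illustrative; the function names, score bands, and feedback text are our own inventions, not Grindstone's actual internal system.

```python
# Hypothetical sketch of the scoring model: five metrics, each graded
# 1-5, with each score mapped to a pre-generated feedback snippet.
# All names and snippet text here are illustrative assumptions.

METRICS = ["Spelling & Grammar", "Style", "Concept", "Composition", "Realisation"]

# Pre-generated feedback bank: one snippet per score (illustrative text).
FEEDBACK_BANK = {
    1: "Needs significant work in this area.",
    2: "Below the field average; room to grow.",
    3: "Solid, in line with the field.",
    4: "A strength of the piece.",
    5: "Outstanding; among the best in the field.",
}

def build_report(scores: dict) -> list:
    """Turn a judge's per-metric Likert scores into report lines."""
    report = []
    for metric in METRICS:
        score = scores[metric]
        if not 1 <= score <= 5:
            raise ValueError(f"Score for {metric} must be 1-5, got {score}")
        report.append(f"{metric}: {score}/5 - {FEEDBACK_BANK[score]}")
    return report

scores = {"Spelling & Grammar": 4, "Style": 3, "Concept": 5,
          "Composition": 3, "Realisation": 4}
for line in build_report(scores):
    print(line)
```

The appeal of a model like this is consistency: two judges giving the same score produce the same baseline feedback, which is exactly the cross-judge and cross-genre comparability the post describes.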
Doing this gives context to your scores and provides you with an insight into how the judge viewed your work. While this does lose some of the 'personal touch', it not only allows us to provide more feedback overall, but also ensures consistency across both our judges and genres, something we've always been cautious of. How do two different judges view the same piece? How does the same judge view pieces in different genres? These are questions that face other competition organisers as well as us, and we're working hard to address them.
You'll also find, within your Performance Report, a section entitled 'Judge's Comments': personalised feedback about your work, written by the judge who read it, covering some things they liked and some things to improve. And while this is shorter than last year, it lets us follow a much more consistent way of providing feedback, one that ensures quality control throughout the judging period.
We're going to continue to iterate on and fine-tune this system moving forward, but we feel it's a strong step in the right direction, one that can be modified to continually provide more insight into what goes on during the judging periods here at Grindstone.
We'd love to hear back from you about our Performance Reports, about what you think, and about how we might be able to improve them.
Of course, it's not as simple as flicking a switch and getting things done, but we always want to feel connected to you, the entrants, and to mould Grindstone around your suggestions.
And remember, while we know we're not perfect, we're providing feedback for every single entrant, and we're still pretty new at it! Most other competitions don't provide feedback at all, let alone a breakdown of how you performed in the competition. It's what sets Grindstone apart.
We're trying to do things differently here, and forge a new path. It's bumpy, sure, but we're making progress, and we hope you like the direction we're heading in.
Grindstone is a competition provider that cares about its entrants, cares about its judges, and most importantly, cares about the integrity of the concept! Competitions need to be fair and unbiased, and we're moving towards that. We're committed to giving every piece we read the time and consideration it deserves, as well as providing explanations and context for its placement.
Thanks so much for your support so far. We love your writing, and we thank you all for giving Grindstone the chance to be as great as we know it can be.
See you soon,
PS. Oh, and of course... Keep writing!