Planning your assessment process carefully helps you maximize the value you get from it. In this article we describe considerations that can help you construct well-crafted assessments. This guidance applies whether you plan to use pre-built challenges or build your own.
Establish Audience and Goals
Establishing and keeping an audience and goals in mind guides the assessment planning process. Since assessments can be designed to fulfill a wide range of goals and target many different audiences, our recommendations for building an assessment come with the caveat that they may not be appropriate for all use cases.
For example, in a hiring use case, an assessment might target a senior-level data scientist with the goal of evaluating their ability to fill a particular role.
Another example might be directing an assessment towards an audience of current employees with the goal of upskilling in a particular technology.
In other educational settings, a bootcamp may provide an assessment to an audience of students completing a module with the goal of verifying their acquisition of skill in a particular technology stack.
Time is a key factor when building an assessment. It limits both the breadth and depth of what you can test and what a candidate can realistically complete.
It takes candidates a good deal of time to complete coding challenges (usually more than you anticipate), gain familiarity with the platform, and write thoughtful responses. With that in mind, we recommend erring on the side of allowing more time rather than less and being judicious about the challenges you include in your assessment.
For pre-built library content, we offer rough time estimates to help guide your challenge selection process. You can place hard or soft time limits on candidates. Soft limits can ease candidate stress, but a hard limit with a generous allotment adds little additional stress and helps timebox sessions, which can make candidates easier to compare.
In some cases, splitting an assessment into multiple rounds or stages gives potential hires needed downtime and ultimately yields greater signal than one monolithic assessment might.
Take-home challenges can also be delivered using our platform, allowing students or candidates days to work through a project or a series of challenges.
For a typical pre-screen challenge, we recommend keeping the length to no more than an hour, with the potential for candidates to finish within half an hour.
For a typical full-hire challenge, a 180-minute assessment is generally a reasonable baseline. Anything longer would likely be better delivered as a take-home.
When building an assessment, a typical approach is to increase difficulty incrementally from one challenge to the next.
In most assessments, it's a good idea to begin with a brief warmup challenge that familiarizes candidates with the workflow and platform and gives them a chance to ease into the environment.
Providing a simple challenge up front builds candidate confidence and delivers an early win, a psychological boost that can carry through and improve completion rates for more demanding challenges later in the assessment. This is particularly useful in tense hiring scenarios, when candidates are under abnormal pressure.
A typical goal when creating an assessment is to avoid redundant challenges, for example ones that test small variations on the same few skills without adding meaningful signal or depth. This rule of thumb matters even more given the time limitations described above.
Using diverse challenges that examine distinct facets of a candidate's skill with a technology goes a long way toward increasing your confidence in the validity of an assessment. Diverse challenge selection can also alleviate monotony, boost candidate satisfaction, and improve completion rates.
Building the assessment from a mixture of challenge styles is an easy way to increase topical coverage and avoid redundancy. Try mixing classic code challenges, project code challenges, and Q&A challenges in the same assessment. As another example, pairing a debugging-oriented challenge on an existing code base with a greenfield coding challenge can provide greater signal than two greenfield coding challenges.