On this page
- Code Challenge Styles
- Classic
- Computer Science
- New Functionality
- Refactoring
- Bug Fixing
- Code Review
- Design Review
- UI Implementation
- Problem Solving
- Data Science
- Data Analysis
- Unit Test Construction
- Tech Support
- Usability Testing
- Q&A Challenge Styles
- Supplemental Knowledge
- Research Study
- Vocabulary
- Deductive Reasoning
- Self Report
Traditionally, coding challenges used in whiteboarding and other hiring contexts are algorithmic in nature. These styles tend to focus heavily on the implementation of a well-defined computer science algorithm.
Qualified was built with support for a wide variety of challenge paradigms beyond algorithms to more thoroughly assess skills needed on the job. Regardless of whether you are using Qualified for education, certification or recruitment, work samples form the basis of assessment on our platform.
This article describes many of the styles we've identified in our assessments.
Code Challenge Styles
The following styles are names we have given to different ways of using the three challenge types on the Qualified platform to accomplish specific assessment outcomes with candidates.
Keep In Mind
Not all styles listed below are represented within the Qualified library. Qualified continues to experiment with and develop new styles of challenges to fit customer-specific needs. Please reach out to us if you are interested in learning more.
This article is a work in progress. Many of the style descriptions listed will eventually be expanded to include more content, such as examples and recommended techniques.
Classic
A generalized coding task, mainly focused on testing basic ability to problem solve and write code. This often involves writing a single function from scratch that provides a small utility or solves a simple problem without relying heavily on computer science theory.
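For example, a task in this style might ask for a single small utility function. The prompt below is purely illustrative; the function name and spec are invented for this article, not drawn from the Qualified library.

```python
# Hypothetical classic-style task: split a list into fixed-size chunks.
def chunk(items, size):
    """Return a list of sublists, each holding at most `size` elements."""
    if size < 1:
        raise ValueError("size must be a positive integer")
    return [items[i:i + size] for i in range(0, len(items), size)]
```

A task like this exercises iteration, slicing, and input validation without requiring any algorithmic theory.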
Computer Science
A heavy focus on computer science: implementing well-known algorithms and specific data structures.
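A typical task in this style asks the candidate to implement a well-known algorithm from scratch. Binary search is used here purely as an illustration:

```python
# Illustrative computer-science task: binary search over a sorted list.
def binary_search(sorted_items, target):
    """Return the index of `target` in `sorted_items`, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Scoring for this style is usually handled entirely by unit tests, since correctness is well defined.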
New Functionality
Real-world challenges that involve implementing new functionality within new or existing code.
Refactoring
An existing code base that needs to be simplified, updated to make room for new improvements, or otherwise improved. Unit tests can be used to ensure the code still works as expected, with scoring done mostly through objective rubrics.
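As a sketch, a refactoring exercise might present duplicated branch logic and ask the candidate to consolidate it. The "after" state could look like the following; all names and rates are made up for illustration:

```python
# Hypothetical post-refactor code: the original repeated the discount
# arithmetic in one if/elif branch per tier; a table lookup replaces it.
DISCOUNT_RATES = {"gold": 0.20, "silver": 0.10}

def discounted_price(price, tier):
    """Apply the tier's discount rate; unknown tiers pay full price."""
    rate = DISCOUNT_RATES.get(tier, 0.0)
    return round(price * (1 - rate), 2)
```

The pre-existing unit tests would pass against both versions, confirming behavior was preserved, while a rubric scores the structural improvement.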
Bug Fixing
An existing code base with correctness issues that need to be resolved. This challenge style requires understanding existing code before making changes. Test cases are typically used to verify that bugs were fixed, but manual reviews can play a role as well.
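A minimal sketch of this style: the candidate receives a function whose tests fail and must locate the defect. Here the planted bug was an off-by-one range; the example is hypothetical.

```python
def sum_to_n(n):
    """Return the sum of the integers 1 through n inclusive."""
    total = 0
    # The buggy version used range(1, n), silently dropping n itself;
    # the fix extends the range to include n.
    for i in range(1, n + 1):
        total += i
    return total
```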
Code Review
Existing code where the only task is to leave review feedback as written responses, scored with objective rubrics.
Design Review
Skeleton code is provided that conveys a strong sense of the system design. The code is not necessarily functional, and is not expected to be; instead, it serves as an interactive diagram and discussion point for deep system analysis. Scoring is accomplished using objective rubrics.
UI Implementation
A design is provided, and the candidate must implement it as a working application. This could be a design from scratch, or an existing design that needs to be updated. Scoring can combine unit tests and objective rubrics.
Problem Solving
A challenge with a heavy emphasis on solving a problem, requiring clear evidence of critical thinking beyond the level of problem solving normally needed to implement functionality in code. Puzzles and other non-real-world challenges that target skills such as abstract problem solving, mathematical thinking, and reasoning are examples of this category.
Data Science
A heavy focus on statistics, math, machine learning libraries, data munging, and reporting.
Data Analysis
A challenge that prompts a candidate to explore data and produce an analysis. This is an area where multiple challenges can work well together; for example, a coding challenge paired with a Q&A challenge.
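A data-analysis prompt might hand the candidate a raw series and ask for summary statistics plus a short written interpretation. A sketch using Python's standard library follows; the numbers are invented for illustration.

```python
import statistics

# Hypothetical API response times (ms); the outlier is deliberate,
# since spotting it is part of the analysis.
response_times_ms = [120, 135, 110, 480, 125, 130, 118]

summary = {
    "mean": round(statistics.mean(response_times_ms), 1),
    "median": statistics.median(response_times_ms),
    "stdev": round(statistics.stdev(response_times_ms), 1),
}
# The mean (174.0) sits far above the median (125), flagging the outlier.
```

The numeric work could be scored with unit tests, while the written interpretation is scored against a rubric, which is where pairing with a Q&A challenge helps.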
Unit Test Construction
Existing code that needs tests written for it. With project challenges, the candidate's tests can be used to generate a score for the challenge; however, that score should be treated as informative only, with objective rubrics ultimately used to score the challenge.
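For instance, a challenge might provide a small utility such as the hypothetical slugify below and ask the candidate to write its test suite. Both the function and the test names are made up for illustration.

```python
import re
import unittest

def slugify(text):
    """Provided code under test: lowercase a title and hyphenate it."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# A candidate-written suite; rubric scoring would reward coverage of
# edge cases like punctuation runs and leading/trailing separators.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_collapsed(self):
        self.assertEqual(slugify("C++ & Rust!"), "c-rust")

    def test_leading_trailing_separators_trimmed(self):
        self.assertEqual(slugify("  spaced  "), "spaced")
```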
Tech Support
Instead of changing code, the challenge focuses on responding to a customer issue: troubleshooting what the issue may be and reporting back to management or the customer. Objective rubrics are used to score the challenge.
Usability Testing
A front-end challenge where no coding is required. Source code may be hidden, and the candidate interacts with the application through its interface. The task is to log any usability issues or bugs found as support tickets. This is objectively scored using rubrics, as in any other written-response challenge.
Q&A Challenge Styles
Supplemental Knowledge
A set of questions designed to supplement a coding challenge. This strategy uses targeted knowledge measurements to add depth to a candidate's profile. For example, after a React coding challenge, a quiz could test the candidate's knowledge of design decisions, common pitfalls, best practices, or conceptual nuances of React. The goal is to fill in important gaps that may be difficult to test within a coding challenge.
Research Study
A challenge that provides context for a problem, with specific information that must be uncovered through research. Candidates seek out specific knowledge on the internet and in documentation, then complete written answers or multiple-choice questions.
For example, a research project could involve identifying existing software packages, libraries, or applications that could implement a new set of functionality, and weighing the merits of adopting an off-the-shelf solution versus developing one in-house.
Vocabulary
A series of vocabulary and terminology questions that establish a candidate's expertise and understanding of a subject.
Deductive Reasoning
A series of deductive reasoning questions in which a set of statements or rules is specified and the candidate determines which choices are accurate based on those statements. This could also take the form of reviewing code and indicating what a statement will do given the parameters passed in.
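The code-reading variant might show a short function and ask which answer choice matches its output for a given input. A made-up example:

```python
# Sample question: "What does classify(7) return?"
def classify(n):
    if n % 2 == 0:
        return "even"
    if n < 10:
        return "small odd"
    return "large odd"
```

The candidate must trace the branch order to see that 7 skips the even check and matches the n < 10 case.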
Self Report
A set of questions asking candidates to report on aspects of themselves. This could be used to score job fit or culture fit, or to have candidates rate their own skills. Self reports can save time by letting candidates self-select out of positions whose essential skills or technologies they lack, and can serve as a tool for identifying gaps that merit further evaluation.