Andrew Stokes asks whether students should be tested via their mobiles — and whether they should even know they are being assessed.
Andrew Stokes looks at what goes on behind the scenes to ensure that a test is both fair and accurate.
In this interview, Adrian Raper talks to Richard Spiby from the British Council about the CEFR and the importance of teaching to the test construct.
Your student took a placement test. The result was B2. You believe he is B1. What went wrong?
Subramoni Iyer describes how his experience with students and teachers in Syria led to localised test instructions and a pre-test video.
In this video, Andrew Stokes discusses how we can ensure that a placement test — or any other kind of language test — is culturally fair.
Surely the more questions you answer in a placement test, the more points you get and the higher your score? If you can’t finish, you can’t do yourself justice. And that must invalidate the result.
In the second of a series of short videos, testing expert Laura Edwards looks at the roles of output and input in language testing.
Should a placement test include speaking and writing? Is it important that it is adaptive? Does a test-taker have to attempt every question? What, in fact, is a placement test?
When Clarity and telc first conceptualised the Dynamic Placement Test, a key objective was to devise a democratic test — a computer-based level test available to schools whatever their digital setup. At the same time, we didn’t want to compromise on the technology: it needed to be a test that went well beyond multiple-choice questions and gap fills. Within these constraints, the team prioritised three areas.