This is a legacy algorithm: the API has changed since it was implemented, so new versions and forks will need to be updated.
This algorithm is splittable.

Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.

Group: probes

Endpoint Name   Data Format                        Nature
keystroke       system/kboc16_keystroke/1          Input
file            system/uint64/1                    Input
client_id       system/text/1                      Input
score_file      robertodaza/competition_kboc16/1   Output

Group: templates

Endpoint Name   Data Format                 Nature
features        system/kboc16_keystroke/1   Input
id              system/text/1               Input
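To illustrate how the two groups above might be consumed, here is a toy sketch of an algorithm class. The `process(self, inputs, outputs)` shape follows the legacy BEAT convention mentioned on this page, but the data objects are plain Python stand-ins (floats and dicts), not the real platform types, and the toy matcher is purely illustrative; on the platform, synchronization of each group is handled for you.

```python
# Toy sketch only: plain dicts stand in for the platform's input/output
# objects, and the "matcher" is a one-line distance on dummy float features.

class Algorithm:
    def __init__(self):
        # enrollment templates accumulated per client id
        self.templates = {}

    def process(self, inputs, outputs):
        # 'templates' group: store enrollment features under their client id
        if "features" in inputs:
            self.templates.setdefault(inputs["id"], []).append(inputs["features"])

        # 'probes' group: score a probe against the claimed client's templates
        if "keystroke" in inputs:
            refs = self.templates.get(inputs["client_id"], [0.0])
            mean_ref = sum(refs) / len(refs)
            # toy score: negative absolute distance to the mean template
            outputs["score_file"] = -abs(inputs["keystroke"] - mean_ref)
        return True


alg = Algorithm()
outputs = {}
alg.process({"features": 10.0, "id": "u1"}, outputs)          # enrollment
alg.process({"features": 12.0, "id": "u1"}, outputs)          # enrollment
alg.process({"keystroke": 13.0, "client_id": "u1"}, outputs)  # probe
print(outputs["score_file"])  # -2.0: |13 - mean(10, 12)| negated
```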

Algorithms may use functions and classes declared in libraries. Below you can see the libraries and import names used by this algorithm. You don't need to import a library manually in your code; the platform does it for you. Just use the object under its selected import name. For example, if you choose to import a library using the name lib, then access a function f within your code as lib.f().

Library                                  Import as
robertodaza/kboc16_baseline_matchers/5   kboc16_baseline_matchers
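The automatic import the platform performs is roughly equivalent to the following. The real library (robertodaza/kboc16_baseline_matchers/5) is not importable outside the platform, so this sketch uses the standard `math` module as a stand-in to show the aliasing pattern.

```python
# Illustration of the platform's import-as-name behaviour; you never write
# this line yourself. "math" is a stand-in for the actual library module.
import importlib

lib = importlib.import_module("math")  # platform binds your chosen name, e.g. "lib"
print(lib.sqrt(16.0))  # call functions as lib.f() -> 4.0
```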

The code for this algorithm is in Python.

The Keystroke Biometric Ongoing Competition (KBOC) is an official competition of the IEEE Eighth International Conference on Biometrics: Theory, Applications, and Systems (BTAS 2016), organized by the ATVS Biometric Research Group.

Participant Block: this code (in Python) comprises the evaluation block of the KBOC16 competition.

The genuine and impostor samples are unknown except for the training samples (the first 4 samples). In order to avoid overfitting of the systems and any possible misconduct, the performance evaluation is made over 100 of the 300 users. These first 100 users are representative of the complete set of 300 users: as an example, the difference between the performance of the baseline algorithms is less than 1%. The evaluation over all 300 users will be done during the final weeks of the competition. Together with this block, you can access the library kboc16_baseline_matchers (robertodaza/kboc16_baseline_matchers/5) with 3 baseline systems (see the examples below).

HOW TO PARTICIPATE: participants can modify the code of this algorithm to include their own keystroke recognition systems. The use of libraries and toolboxes beyond those included in this example is allowed. The participant's code may remain private, but its results must be available to the competition organizers (in order to include them in the final competition report).

IMPORTANT: please replace line 108, score=1/(d+0.001), with score=-d. In some cases (with a large dynamic margin between scores), the inverse of the distance can be problematic.
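A short numeric sketch of why this replacement matters: both mappings decrease as the distance d grows, so rankings are preserved, but the inverse compresses large distances into a tiny score range, which is exactly the dynamic-margin problem described above. The distance values here are arbitrary examples.

```python
# Compare the two score mappings on a few arbitrary distances.
distances = [0.0, 0.5, 5.0, 500.0]

inv_scores = [1.0 / (d + 0.001) for d in distances]  # original line 108
neg_scores = [-d for d in distances]                 # recommended replacement

# Both mappings are monotonically decreasing in d, so rankings agree...
assert inv_scores == sorted(inv_scores, reverse=True)
assert neg_scores == sorted(neg_scores, reverse=True)

# ...but the inverse nearly collapses the two largest distances together,
# while -d keeps their separation intact.
print(inv_scores[2] - inv_scores[3])  # ~0.198: 5.0 vs 500.0 almost indistinguishable
print(neg_scores[2] - neg_scores[3])  # 495.0: separation preserved
```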

Experiments

This table shows the number of times this algorithm has been successfully run in the given environment. Note that this does not provide enough information to judge whether the algorithm will run under different conditions.

BEAT platform version 1.3.1rc1 | © Idiap Research Institute - 2013-2019