Thesis benchmark: Human input behavior

Research Overview

This thesis project looks at how people aim in FPS-style tasks, and whether those movement patterns can help separate real controller play from mouse input that has been translated to look like controller input. In simple terms, I study how the aim actually behaves, not just which device a session claims to use.

The benchmark is designed as a short, repeatable task with fixed durations and consistent goals. That makes it easier to compare players, runs, and device modes in a structured way while still preserving realistic moment-to-moment behavior.

Behavior over claims

The benchmark focuses on how aiming actually unfolds over time. I care less about what a device claims to be, and more about whether the movement profile looks human, controller-native, or translated from another input source.

Fair-play motivation

Some adapters can make mouse input resemble controller input, which can create aim-assist advantages in games built around fair controller play. The goal here is to understand those patterns better without relying only on whatever device label gets reported.

Rich telemetry

Sessions record score progression, hits, misses, timing, input traces, and key gameplay events in small batches. That means even abandoned or partial runs still contribute useful evidence for later analysis.
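As a rough illustration of the small-batch recording idea, here is a minimal sketch in Python. The class and field names (`TelemetryRecorder`, `EventBatch`, `sink`) are hypothetical stand-ins, not the platform's actual API; the point is only that events are buffered and flushed in small batches, so data persists even if a run ends early.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class EventBatch:
    """A small batch of gameplay events flushed together (hypothetical schema)."""
    session_id: str
    events: list = field(default_factory=list)

class TelemetryRecorder:
    """Buffers events and flushes them in small batches so partial runs still persist."""

    def __init__(self, session_id, batch_size=32, sink=None):
        self.session_id = session_id
        self.batch_size = batch_size
        self.sink = sink if sink is not None else []  # stand-in for an upload endpoint
        self.buffer = []

    def record(self, kind, **payload):
        """Append one timestamped event; flush automatically when the buffer fills."""
        self.buffer.append({"t": time.monotonic(), "kind": kind, **payload})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Serialize and hand off whatever is buffered, then clear the buffer."""
        if self.buffer:
            batch = EventBatch(self.session_id, self.buffer)
            self.sink.append(json.dumps(asdict(batch)))
            self.buffer = []
```

Because each flush is independent, an interrupted session simply stops mid-stream; everything already flushed remains usable.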

What is collected during a session

Performance outcomes

Score, shots fired, shots hit, accuracy, and benchmark mode length.

Input traces

Mouse movement, clicks, keys, controller sticks, triggers, and button activity depending on the run type.

Context and events

Pause/fullscreen transitions, movement-zone events, target interactions, timestamps, session identifiers, and related metadata.
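To make the three categories above concrete, here is one way a consolidated per-run record could be shaped. This is a hypothetical sketch for illustration, not the platform's real data model; field names like `mode` and `events` are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SessionRecord:
    """Hypothetical consolidated view of what one run contributes."""
    session_id: str
    mode: str                # e.g. "mouse_keyboard" or "controller"
    duration_s: float        # benchmark mode length
    score: int = 0
    shots_fired: int = 0
    shots_hit: int = 0
    events: list = field(default_factory=list)  # pause/fullscreen, zone, target events

    @property
    def accuracy(self) -> float:
        """Hit ratio; defined as 0.0 when no shots were fired."""
        return self.shots_hit / self.shots_fired if self.shots_fired else 0.0
```

Keeping outcomes, input context, and events in one record makes it straightforward to export runs as rows for notebook analysis.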

How the data is used

  • Compare aim behavior across mouse/keyboard and controller sessions.
  • Measure how movement evolves through a run, not just the final score.
  • Build figures and exportable datasets for notebook analysis and thesis reporting.
  • Train and evaluate models that can separate likely controller-native behavior from translated input.
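To show roughly what "separating behavior" can mean in practice, here is a toy feature-plus-threshold sketch. This is not the thesis's actual model, just an illustration: it summarizes a 1-D aim-angle trace with a roughness ratio (mean acceleration magnitude over mean velocity magnitude) and flags rough, mouse-like traces. The function names and the `0.5` threshold are assumptions for demonstration.

```python
def trace_features(angles):
    """Summary features of a 1-D aim-angle trace sampled at a fixed rate."""
    deltas = [b - a for a, b in zip(angles, angles[1:])]          # per-step velocity
    accels = [b - a for a, b in zip(deltas, deltas[1:])]          # per-step acceleration
    mean_abs_delta = sum(abs(d) for d in deltas) / max(len(deltas), 1)
    mean_abs_accel = sum(abs(a) for a in accels) / max(len(accels), 1)
    # Rougher, high-frequency traces have large acceleration relative to velocity.
    roughness = mean_abs_accel / mean_abs_delta if mean_abs_delta else 0.0
    return {"mean_abs_delta": mean_abs_delta, "roughness": roughness}

def looks_translated(features, roughness_threshold=0.5):
    """Toy rule: flag traces whose roughness exceeds a threshold."""
    return features["roughness"] > roughness_threshold
```

A real evaluation would replace the hand-set threshold with a trained classifier and far richer features, but the pipeline shape (trace in, features out, decision last) is the same.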

Why partial runs still matter

Data is uploaded continuously while you play. If you stop early, everything that was already uploaded is kept and the session is marked as abandoned instead of being deleted. That makes interruptions and incomplete runs part of the dataset instead of hiding them.
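A minimal sketch of the keep-don't-delete policy, assuming a simple server-side store. `SessionStore` and its methods are hypothetical names; the point is that finalizing an incomplete run only changes its status, never discards already-uploaded batches.

```python
class SessionStore:
    """In-memory stand-in for a server-side session store (hypothetical API)."""

    def __init__(self):
        self.sessions = {}

    def upload_batch(self, session_id, batch):
        """Accept one uploaded batch; create the session record on first contact."""
        session = self.sessions.setdefault(
            session_id, {"batches": [], "status": "in_progress"}
        )
        session["batches"].append(batch)

    def finalize(self, session_id, completed):
        """Mark the run completed or abandoned; uploaded batches are always kept."""
        self.sessions[session_id]["status"] = "completed" if completed else "abandoned"
```

Treating "abandoned" as a status rather than a deletion means interruptions show up in the dataset as first-class observations.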

How controller spoofing / translated input fits in

Here is a grounded version of the problem this benchmark is trying to study.

1. Real input starts somewhere

A player might physically use a mouse, keyboard, or controller. That raw movement has its own texture: mouse movement tends to be sharp and high-frequency, while controller movement tends to be smoother and bounded by stick mechanics.
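One simple way to quantify that "texture" difference is to look at where a trace's spectral energy sits: sharp, high-frequency mouse movement puts more energy into high frequency bins than smooth stick movement. The sketch below uses a plain DFT over a short trace; the function name and the `cutoff_ratio` value are illustrative assumptions, not a measure used in the thesis.

```python
import cmath

def high_freq_energy_fraction(samples, cutoff_ratio=0.25):
    """Fraction of non-DC spectral energy above a cutoff frequency bin.

    Pure-Python DFT; fine for short aim traces. Higher values suggest
    sharper, higher-frequency movement.
    """
    n = len(samples)
    # Compute positive-frequency bins, skipping DC (k = 0).
    spectrum = [
        sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(1, n // 2)
    ]
    energies = [abs(c) ** 2 for c in spectrum]
    total = sum(energies)
    if total == 0:
        return 0.0
    cutoff = int(len(energies) * cutoff_ratio)
    return sum(energies[cutoff:]) / total
```

For real traces one would window the signal and use an FFT library, but even this crude fraction separates a slow sweep from rapid micro-corrections.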

2. An adapter can translate it

Some devices sit between the real input source and the game, then convert mouse movement into controller-like stick signals. To the game, that can look like controller input even though the physical movement started as mouse input.

3. The behavior can still leave traces

Even when the reported device says controller, the resulting aim path may still carry mouse-like traits. This benchmark is meant to collect enough time-series detail to study that gap between reported device and observed behavior.

Plain-language summary

You play a short aiming benchmark. The platform records how your input and performance change over time, then I use those patterns to study fair-play questions around controller behavior, translated input, and reproducible gameplay telemetry. The goal is to make it easier to reason about suspicious input behavior without reducing everything to a single score or a single device label.