The setup of our calibration system. This is a crucial part of the gaze tracker, as it maps the user's eyes to positions on the screen.
| Variable | Type | Description |
| --- | --- | --- |
| `calibrationPoints` | Object | Stores the number of clicks for each calibration point. |
| `totalPointsCalibrated` | number | Number of points calibrated (points clicked 5 times). |
| `userGazePoints` | Object | Stores mouse and pupil coordinates when the user clicks on each of the nine points. |
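The exact shape of these objects is not fixed by the table above; the following is a minimal sketch of how they could be laid out, assuming nine points identified by string IDs and up to five samples per point (all names and values are illustrative):

```js
// Illustrative shapes only; the real objects may differ.
// Click counts per calibration point (a point is "calibrated" after 5 clicks).
const calibrationPoints = {
  "point-1": 0,
  "point-2": 0,
  // ... up to "point-9"
};

let totalPointsCalibrated = 0;

// For every click on a point we keep the mouse position and the pupil
// position measured at that moment, so each point accumulates up to 5 samples.
const userGazePoints = {
  "point-1": [
    { mouseX: 120, mouseY: 80, pupilX: 14, pupilY: 9 },
    // ...further samples from later clicks
  ],
};
```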
| Function | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `startCalibration()` | Initialises the 9 calibration points and the start-calibration button. When clicked, `drawCalibrationPoints()` is executed. | None | None |
| `drawCalibration()` | Draws the 9 calibration points on the screen. The user has to click a point 5 times in order to collect their gaze data, which is stored in `userGazePoints`. | None | None |
| `calibrateAllPoints(event)` | Calibrates all points: gets the pupil coordinates and stores them alongside the mouse position in `userGazePoints`. By default the pupil coordinates are extracted from `croppedCanvasLeft`. | `event`: Event | None |
| `storePupilCoordinates(PointID, pupilX, pupilY)` | Helper function that stores an individual pupil's coordinates in `userGazePoints`. | `PointID`: string, `pupilX`: number, `pupilY`: number | None |
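A hedged sketch of the per-click bookkeeping described above. The function names mirror the table, but the bodies, the sample shape, and the click threshold are assumptions rather than the project's actual code; `getPMIIndex` and `getPupils` are the pupil-extraction functions documented further below.

```js
// Sketch only: element IDs, sample shape, and the click threshold are assumptions.
const CLICKS_PER_POINT = 5;

function storePupilCoordinates(pointID, pupilX, pupilY) {
  // Keep one entry per click for this calibration point.
  if (!userGazePoints[pointID]) userGazePoints[pointID] = [];
  userGazePoints[pointID].push({ pupilX, pupilY });
}

function calibrateAllPoints(event) {
  const pointID = event.target.id;

  // Pupil position is taken from the left-eye canvas pipeline.
  const pmiIndex = getPMIIndex(croppedCanvasLeft);
  const [pupilX, pupilY] = getPupils(croppedCanvasLeft, pmiIndex);
  storePupilCoordinates(pointID, pupilX, pupilY);

  // Record the mouse position alongside the pupil sample just stored.
  const sample = userGazePoints[pointID][userGazePoints[pointID].length - 1];
  sample.mouseX = event.clientX;
  sample.mouseY = event.clientY;

  // A point counts as calibrated once it has been clicked 5 times.
  calibrationPoints[pointID] = (calibrationPoints[pointID] || 0) + 1;
  if (calibrationPoints[pointID] === CLICKS_PER_POINT) totalPointsCalibrated += 1;
}
```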
Initialises various HTML elements, including the video feed, settings menu, and canvases.
| Variable | Type | Description |
| --- | --- | --- |
| `video` | HTMLElement | Main video feed that captures the user's face and displays it on the screen. |
| `croppedCanvasLeft` | HTMLElement | Cropped-out eye image (black and white). |
| `grayscaleCanvas` | HTMLElement | Cropped-out eye image (grayscale). |
| Function | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `clearCanvas(canvas)` | Clears the HTML canvas. | `canvas`: HTMLElement | None |
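`clearCanvas` most likely just wipes the canvas's 2D drawing surface; a minimal sketch, assuming a standard `<canvas>` element:

```js
function clearCanvas(canvas) {
  // Clear every pixel on the canvas's 2D drawing surface.
  const ctx = canvas.getContext("2d");
  ctx.clearRect(0, 0, canvas.width, canvas.height);
}
```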
Extracts the eyes from the face API's landmark detection.
| Function | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `drawCroppedCanvases(detections, leftCanvas)` | Draws a cropped canvas for the left eye. | `detections`: any, `leftCanvas`: HTMLElement | None |
| `calculateStartAndDistance(eye, padding)` | Finds the starting and ending x, y coordinates of a bounding box around the eyes. | `eye`: Object, `padding`: number | `x, y`: x, y value of the left eye |
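A sketch of the bounding-box calculation, assuming `eye` is a list of landmark points (each with `x`/`y` fields) returned by the landmark detector; the exact return shape is also an assumption:

```js
function calculateStartAndDistance(eye, padding) {
  // Collect the x and y coordinates of the eye's landmark points.
  const xs = eye.map((p) => p.x);
  const ys = eye.map((p) => p.y);

  // Expand the tight landmark bounds by `padding` pixels on every side.
  const startX = Math.min(...xs) - padding;
  const startY = Math.min(...ys) - padding;
  const width = Math.max(...xs) - Math.min(...xs) + 2 * padding;
  const height = Math.max(...ys) - Math.min(...ys) + 2 * padding;

  return { x: startX, y: startY, width, height };
}
```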
This file contains OpenCV methods that apply filters to our canvas to make it easier to extract pupil data.
| Function | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `applyImageProcessing(canvas, debugCanvas)` | Converts the image from `canvas` to grayscale and applies it to both canvases. | `canvas`: HTMLElement, `debugCanvas`: HTMLElement | None |
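With OpenCV.js, the grayscale conversion could look roughly like this; a sketch only, assuming the `cv` module is already loaded and that both arguments are canvas elements:

```js
function applyImageProcessing(canvas, debugCanvas) {
  // Read the current canvas contents into an OpenCV matrix.
  const src = cv.imread(canvas);
  const gray = new cv.Mat();

  // Convert RGBA -> single-channel grayscale.
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);

  // Write the result back to both the working canvas and the debug canvas.
  cv.imshow(canvas, gray);
  cv.imshow(debugCanvas, gray);

  // OpenCV.js matrices must be freed manually.
  src.delete();
  gray.delete();
}
```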
Extraction of pupils from a cropped grayscale canvas.
| Variable | Type | Description |
| --- | --- | --- |
| `originalGrayScaleData` | Array | Grayscale pixel data of the cropped eye canvas. |
| `pimX` | number | |
| `pimY` | number | |
| Function | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `evaluateIntensity(canvas, intensityThreshold)` | Changes the intensity of each pixel in the canvas. | `canvas`: HTMLElement, `intensityThreshold`: number | None |
| `applyMinimumFilter(canvas)` | Applies a minimum filter to the specified canvas. | `canvas`: HTMLElement | None |
| `getPMIIndex(canvas)` | Calculates the PMI (pixel with minimum intensity) of `canvas`. | `canvas`: HTMLElement | `pmiIndex`: number. Index of the PMI in `originalGrayScaleData`. |
| `getPupils(canvas, pmiIndex)` | Calculates the pixel position of the pupil based on the PMI. | `canvas`: HTMLElement, `pmiIndex`: number | number: [x, y]. The x and y pupil coordinates. |
| `drawPupilRegion(canvas, pupilX, pupilY)` | Draws a graphic on the pupil. | `canvas`: HTMLElement, `pupilX`: number, `pupilY`: number | None |
| `getAverageIntensity(grayScaleMatrix, x, y, size, canvas)` | Calculates the average intensity in a (size × size) pixel square around the PMI. | `grayScaleMatrix`: Array, `x`: number, `y`: number, `size`: number, `canvas`: HTMLElement | number: average intensity around the PMI. |
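The PMI search and the neighbourhood averaging can be sketched as follows, assuming the grayscale values are stored one per pixel in a flat, row-major array (how the data is actually laid out is an assumption):

```js
// Find the index of the darkest pixel (PMI). The canvas argument is kept to
// match the documented signature; the data is assumed to live in
// originalGrayScaleData.
function getPMIIndex(canvas) {
  let pmiIndex = 0;
  for (let i = 1; i < originalGrayScaleData.length; i++) {
    if (originalGrayScaleData[i] < originalGrayScaleData[pmiIndex]) pmiIndex = i;
  }
  return pmiIndex;
}

// Convert the flat PMI index back to pixel coordinates.
function getPupils(canvas, pmiIndex) {
  const x = pmiIndex % canvas.width;
  const y = Math.floor(pmiIndex / canvas.width);
  return [x, y];
}

// Average intensity of a (size x size) square centred on (x, y),
// clamped to the canvas bounds.
function getAverageIntensity(grayScaleMatrix, x, y, size, canvas) {
  const half = Math.floor(size / 2);
  let sum = 0;
  let count = 0;
  for (let dy = -half; dy <= half; dy++) {
    for (let dx = -half; dx <= half; dx++) {
      const px = x + dx;
      const py = y + dy;
      if (px >= 0 && py >= 0 && px < canvas.width && py < canvas.height) {
        sum += grayScaleMatrix[py * canvas.width + px];
        count++;
      }
    }
  }
  return sum / count;
}
```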
Mapping of the extracted pupil coordinates to a gaze position on the screen, based on the calibration data.
| Function | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `getPositions(pupilX, pupilY)` | Returns relevant information regarding the user's gaze and screen boundaries, based on the calibration. | `pupilX`: number, `pupilY`: number | number: `cursorX`, `cursorY`, `screenWidth`, `screenHeight`, `yScreenStart`, `xScreenStart`. The user's gaze position plus screen coordinates and dimensions. |
| `drawMapping(canvas, cursorX, cursorY, screenWidth, screenHeight, yScreenStart, xScreenStart)` | Draws the user's gaze as a circle in the browser. | `canvas`: HTMLElement, `cursorX`: number, `cursorY`: number, `screenWidth`: number, `screenHeight`: number, `yScreenStart`: number, `xScreenStart`: number | None |
| `getAveragePoints(key)` | Extracts average points from `userGazePoints`. | `key`: string | number[][]: list of averaged coordinates. |
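Two of these functions are simple enough to sketch. The averaging below assumes the sample field names introduced earlier (`pupilX`/`pupilY`, `mouseX`/`mouseY`) and that `key` selects between them; the `drawMapping` sketch drops the screen-geometry parameters for brevity. Both are illustrations, not the project's actual implementation.

```js
// Average the clicks collected for each calibration point in userGazePoints.
// The key naming ("pupil" or "mouse") is an assumption.
function getAveragePoints(key) {
  return Object.values(userGazePoints).map((samples) => [
    samples.reduce((sum, p) => sum + p[key + "X"], 0) / samples.length,
    samples.reduce((sum, p) => sum + p[key + "Y"], 0) / samples.length,
  ]);
}

// Draw the estimated gaze point as a circle on an overlay canvas.
function drawMapping(canvas, cursorX, cursorY) {
  const ctx = canvas.getContext("2d");
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  ctx.arc(cursorX, cursorY, 10, 0, 2 * Math.PI); // 10 px radius is arbitrary
  ctx.fillStyle = "red";
  ctx.fill();
}
```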