Welcome to the documentation!#
Gazetimation provides an out-of-the-box solution for gaze estimation.
Installation#
pip install gazetimation
Usage#
from gazetimation import Gazetimation

gz = Gazetimation(device=0)  # or any other device id
gz.run()
To run on a video file:
gz.run(video_path='path/to/video')
To save the output as a video file:
gz.run(video_output_path='path/to/video.avi')
The run method also accepts a handler function for further processing.
gz.run(handler=my_handler)
Attention
The handler function will be called with the frame and the gaze information:
if handler is not None:
    handler([frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye])
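For illustration, a handler along these lines could consume that payload. The name my_handler and the averaging logic are hypothetical; only the argument order shown above comes from the documentation:

```python
def my_handler(payload):
    """Receive one frame plus the gaze estimates for both eyes.

    The payload layout mirrors the call shown above:
    [frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye].
    """
    frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye = payload
    # As a toy example, average the two per-eye gaze points into one estimate.
    avg_x = (gaze_left_eye[0] + gaze_right_eye[0]) / 2
    avg_y = (gaze_left_eye[1] + gaze_right_eye[1]) / 2
    return avg_x, avg_y
```

It would then be passed as gz.run(handler=my_handler).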
The solution can be customized by passing parameters to the Gazetimation constructor and to the run method.
Let’s take a look at the Gazetimation constructor.#
- gazetimation.Gazetimation.__init__(self, face_model_points_3d: Optional[ndarray] = None, left_eye_ball_center: Optional[ndarray] = None, right_eye_ball_center: Optional[ndarray] = None, camera_matrix: Optional[ndarray] = None, device: int = 0, visualize: bool = True) None
Initialize the Gazetimation object.
This holds the configurations of the Gazetimation class.
- Parameters:
face_model_points_3d (np.ndarray, optional) –
Predefined 3D reference points for the face model. Defaults to None.
Note
If not provided, the following default values are used. Values passed explicitly should correspond to the same facial landmarks.
self._face_model_points_3d = np.array(
    [
        (0.0, 0.0, 0.0),        # Nose tip
        (0, -63.6, -12.5),      # Chin
        (-43.3, 32.7, -26),     # Left eye, left corner
        (43.3, 32.7, -26),      # Right eye, right corner
        (-28.9, -28.9, -24.1),  # Left mouth corner
        (28.9, -28.9, -24.1),   # Right mouth corner
    ]
)
left_eye_ball_center (np.ndarray, optional) –
Predefined 3D reference point for the left eye ball center. Defaults to None.
Note
If not provided, the following default values are used. Values passed explicitly should correspond to the same facial landmarks.
self._left_eye_ball_center = np.array([[29.05], [32.7], [-39.5]])
right_eye_ball_center (np.ndarray, optional) –
Predefined 3D reference point for the right eye ball center. Defaults to None.
Note
If not provided, the following default values are used. Values passed explicitly should correspond to the same facial landmarks.
self._right_eye_ball_center = np.array([[-29.05], [32.7], [-39.5]])
camera_matrix (np.ndarray, optional) –
Camera matrix. Defaults to None.
Important
If not provided, the system tries to calculate the camera matrix using the find_camera_matrix method. The calculated camera matrix is estimated from the width and height of the frame, so it is an approximation rather than an exact solution.
device (int, optional) –
Device index for the video device. Defaults to 0.
Attention
If a negative device index is provided, the system tries to find the first available video device index using the find_device method. So, if not sure, pass device = -1.
if device < 0:
    self._device = self.find_device()
else:
    self._device = device
visualize (bool, optional) – If True, annotated frames are displayed. Defaults to True.
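To make the camera_matrix parameter concrete, the standard pinhole approximation builds the matrix from the frame size, which is presumably similar in spirit to what the fallback computes. The helper below, approx_camera_matrix, is a hypothetical sketch, not the library's find_camera_matrix:

```python
import numpy as np

def approx_camera_matrix(frame_width: int, frame_height: int) -> np.ndarray:
    """Pinhole-camera approximation: focal length taken as the frame width,
    optical center placed at the frame midpoint."""
    focal_length = frame_width
    center_x, center_y = frame_width / 2, frame_height / 2
    return np.array(
        [
            [focal_length, 0, center_x],
            [0, focal_length, center_y],
            [0, 0, 1],
        ],
        dtype="double",
    )
```

Such a matrix could then be supplied explicitly, e.g. Gazetimation(camera_matrix=approx_camera_matrix(640, 480)), to skip the built-in estimation.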
Let’s go through the run method.#
- gazetimation.Gazetimation.run(self, max_num_faces: int = 1, video_path: Optional[str] = None, smoothing: bool = True, smoothing_frame_range: int = 8, smoothing_weight='uniform', custom_smoothing_func=None, video_output_path: Optional[str] = None, handler=None)
Runs the solution.
- Parameters:
max_num_faces (int, optional) – Maximum number of face(s)/people present in the scene. Defaults to 1.
video_path (str, optional) – Path to the video. Defaults to None.
smoothing (bool, optional) – If smoothing should be performed. Defaults to True.
smoothing_frame_range (int, optional) – Number of frames to consider when performing smoothing. Defaults to 8.
smoothing_weight (str, optional) – Type of weighting scheme (“uniform”, “linear”, “logarithmic”). Defaults to “uniform”.
custom_smoothing_func (function, optional) – Custom smoothing function. Defaults to None.
video_output_path (str, optional) – Output path (including file extension/format) for the output video. Defaults to None.
handler (function, optional) –
If provided the output is passed to the handler function for further processing.
Attention
The handler will be called with the frame and the gaze information, as shown below:
if handler is not None:
    handler([frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye])
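To illustrate what the three smoothing_weight schemes mean, here is one plausible reading of them as weighted averages over the last few gaze points. This is a sketch of the idea only; the library's internal weighting may differ, and weighted_gaze_average is a hypothetical name:

```python
import numpy as np

def weighted_gaze_average(points, scheme="uniform"):
    """Average recent gaze points; newer points may weigh more."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    if scheme == "uniform":
        weights = np.ones(n)                   # every frame counts equally
    elif scheme == "linear":
        weights = np.arange(1, n + 1)          # newest frame weighs most
    elif scheme == "logarithmic":
        weights = np.log(np.arange(2, n + 2))  # newer frames weigh slightly more
    else:
        raise ValueError(f"unknown smoothing_weight: {scheme!r}")
    weights = weights / weights.sum()
    return weights @ points
```

A custom_smoothing_func would presumably play a similar role, replacing the built-in weighting entirely.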
Attention
If you’re not sure how many people (faces) are present in the scene, you can use the find_face_num method.
Issues#
If any issues are found, they can be reported here.
License#
This project is licensed under the MIT license.