Face attribute estimation

Head pose

This estimator is designed to determine the camera-space head pose. Since 3D head translation is hard to determine reliably without camera-specific calibration, only the 3D rotation component is estimated. The estimator computes Tait–Bryan angles for the head. Zero position corresponds to a face placed orthogonally to the camera direction, with the axis of symmetry parallel to the vertical camera axis.

There are two head pose estimation methods available: estimation by 68 face-aligned landmarks and estimation by the original input image in RGB format. Estimation by image is more precise. If you have already extracted the 68 landmarks for other purposes, you can save time and use the faster estimator based on the 68 landmarks.
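
Whichever method is used, the result carries the three rotation angles. The sketch below is plain Python operating on the HeadPose.asDict() shape documented in the classes section ({"pitch": ..., "roll": ..., "yaw": ...}); the ±20° cutoff is an arbitrary example value, not an SDK constant.

```python
# Illustrative helper over the documented HeadPose.asDict() shape:
# {"pitch": ..., "roll": ..., "yaw": ...} (Tait-Bryan angles).
# The 20-degree threshold is an example value, not an SDK constant;
# the SDK's own getFrontalType() applies its internal criteria.
def is_roughly_frontal(pose: dict, max_angle: float = 20.0) -> bool:
    """True when every head rotation angle is within +/-max_angle degrees."""
    return all(abs(pose[axis]) <= max_angle for axis in ("pitch", "roll", "yaw"))


print(is_roughly_frontal({"pitch": 3.2, "roll": -1.0, "yaw": 10.5}))  # True
print(is_roughly_frontal({"pitch": 0.0, "roll": 0.0, "yaw": 45.0}))   # False
```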

Emotions

This estimator aims to determine whether a face depicted on an image expresses the following emotions:

  • Anger

  • Disgust

  • Fear

  • Happiness

  • Surprise

  • Sadness

  • Neutrality

You can pass only warped images with detected faces to the estimator interface. Better image quality leads to better results.

Emotions estimation presents emotions as normalized float values in the range [0..1], where 0 is the absence of a specific emotion and 1 is the maximum intensity of an emotion.
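
Given scores in that normalized form, picking the predominant emotion is a simple argmax. A minimal sketch, assuming a dict keyed by emotion name (the key names here are illustrative):

```python
def predominant_emotion(scores: dict) -> str:
    """Return the emotion name with the maximum normalized score."""
    return max(scores, key=scores.get)


# Scores are normalized: each lies in [0..1] and together they sum to 1.
scores = {
    "anger": 0.01, "disgust": 0.01, "fear": 0.02, "happiness": 0.85,
    "surprise": 0.05, "sadness": 0.01, "neutral": 0.05,
}
assert abs(sum(scores.values()) - 1.0) < 1e-9
print(predominant_emotion(scores))  # happiness
```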

Mouth state

This estimator is designed to determine smile/mouth/occlusion probability using a warped image. The smile estimation structure consists of:

  • Smile score

  • Mouth score

  • Occlusion score

The sum of the scores always equals 1. Each score is the probability of the corresponding state. The smile score prevails when a smile was successfully detected. If an object in the photo hides the mouth, the occlusion score prevails. The mouth score prevails when neither a smile nor an occlusion was detected.
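
Since the three scores form a probability distribution, the prevailing state is the argmax. A pure-Python sketch over the three documented scores (the state names mirror the list above):

```python
def mouth_state(smile: float, mouth: float, occlusion: float) -> str:
    """Pick the prevailing mouth state; the three scores sum to 1."""
    scores = {"smile": smile, "mouth": mouth, "occlusion": occlusion}
    return max(scores, key=scores.get)


print(mouth_state(smile=0.7, mouth=0.2, occlusion=0.1))  # smile
print(mouth_state(smile=0.1, mouth=0.2, occlusion=0.7))  # occlusion
```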

Eyes estimation

This estimator aims to determine:

  • eye state: open, closed, occluded

  • precise eye iris location as an array of landmarks

  • precise eyelid location as an array of landmarks

Iris landmarks are presented with a template structure Landmarks that is specialized for 32 points. Eyelid landmarks are presented with a template structure Landmarks that is specialized for 6 points.

You can pass only warped images with detected faces to the estimator interface. Better image quality leads to better results.
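
Downstream code typically inspects the eye state first. The sketch below operates on the Eye.asDict() shape documented in the classes section ({"iris_landmarks": ..., "eyelid_landmarks": ..., "state": ...}); the landmark lists are left empty here purely for brevity.

```python
# Works on the documented Eye.asDict() shape:
# {"iris_landmarks": [...32 points...],
#  "eyelid_landmarks": [...6 points...],
#  "state": "open" | "closed" | "occluded"}
def both_eyes_open(left_eye: dict, right_eye: dict) -> bool:
    """True when neither eye is closed or occluded."""
    return left_eye["state"] == "open" and right_eye["state"] == "open"


left = {"iris_landmarks": [], "eyelid_landmarks": [], "state": "open"}
right = {"iris_landmarks": [], "eyelid_landmarks": [], "state": "occluded"}
print(both_eyes_open(left, right))  # False
```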

Note

Orientation terms “left” and “right” refer to the way you see the image as it is shown on the screen. It means that the left eye is not necessarily left from the person’s point of view, but is on the left side of the screen. Consequently, the right eye is the one on the right side of the screen. More formally, the label “left” refers to the subject’s left eye (and similarly for the right eye), such that xright < xleft.

Gaze direction estimation

This estimator is designed to determine gaze direction relative to the head pose estimation. Zero position corresponds to a gaze direction orthogonal to the face plane, with the axis of symmetry parallel to the vertical camera axis.

Note

The roll angle is not estimated; prediction precision decreases as the rotation angle increases.
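
One practical consequence: applications may want to discount gaze output when the head is strongly rotated. A hypothetical filter (the 30° cutoff is an arbitrary example, not an SDK recommendation):

```python
def gaze_is_reliable(head_yaw: float, head_pitch: float,
                     max_rotation: float = 30.0) -> bool:
    """Heuristic: trust the gaze estimate only while head rotation is moderate.

    The 30-degree default is an illustrative cutoff, not an SDK constant.
    """
    return abs(head_yaw) <= max_rotation and abs(head_pitch) <= max_rotation


print(gaze_is_reliable(head_yaw=12.0, head_pitch=-5.0))  # True
print(gaze_is_reliable(head_yaw=55.0, head_pitch=0.0))   # False
```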

Basic attribute estimation

The Attribute estimator determines face basic attributes. Currently, the following attributes are available:

  • age: determines person’s age

  • gender: determines person’s gender

  • ethnicity: determines ethnicity of a person

Before using the attribute estimator, the user is free to decide which of the specific attributes listed above to estimate.

The output structure consists of optional fields describing the results of the requested attributes:

  • age is reported in years (float in range [0, 100])

  • for gender estimation, 1 means male and 0 means female. Estimation precision in cooperative mode is 99.81% with a threshold of 0.5. Estimation precision in non-cooperative mode is 92.5%.

  • ethnicity estimation returns 4 normalized float values, each describing the probability of the person’s ethnicity. The following ethnicities are available:

    • asian

    • caucasian

    • african american

    • indian
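
The documented conventions (gender: 1 = male, 0 = female, with a 0.5 threshold; ethnicity: 4 normalized probabilities) translate directly into post-processing code. A sketch with illustrative key names:

```python
def gender_label(gender_score: float, threshold: float = 0.5) -> str:
    """Map the gender estimate to a label; 1 means male, 0 means female.

    The 0.5 threshold matches the cooperative-mode figure quoted above.
    """
    return "male" if gender_score >= threshold else "female"


def predominant_ethnicity(probabilities: dict) -> str:
    """Return the ethnicity with the highest normalized probability."""
    return max(probabilities, key=probabilities.get)


print(gender_label(0.97))  # male
print(predominant_ethnicity(
    {"asian": 0.05, "caucasian": 0.80, "african_american": 0.05, "indian": 0.10}
))  # caucasian
```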

Warp quality

This estimator aims to predict the visual quality of an image. It is trained specifically on pre-warped human face images and will produce a lower factor if:

  • Image is blurred;

  • Image is under-exposed (i.e., too dark);

  • Image is over-exposed (i.e., too light);

  • Image color variation is low (i.e., image is monochrome or close to monochrome).

The quality factor is a value in range [0..1] where 0 corresponds to low quality and 1 to high quality.
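A common use of the factor is filtering warps before further processing. A minimal sketch; the 0.6 threshold is an arbitrary example, not an SDK recommendation:

```python
def filter_by_quality(quality_factors: list, threshold: float = 0.6) -> list:
    """Keep indices of warps whose quality factor (in [0..1]) meets the threshold.

    The 0.6 default is an example value, not an SDK recommendation.
    """
    return [i for i, q in enumerate(quality_factors) if q >= threshold]


print(filter_by_quality([0.9, 0.3, 0.75, 0.1]))  # [0, 2]
```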

Approximate garbage score

This estimator aims to determine how suitable the source input image is for later descriptor extraction and matching. AGS is a float in the range [0..1], where 0 corresponds to low quality.
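
AGS can be used, for example, to pick the best frame of a face track before extracting a descriptor. An illustrative sketch:

```python
def best_frame_index(ags_scores: list) -> int:
    """Index of the frame with the highest approximate garbage score.

    Higher AGS (closer to 1) means the image is better suited for
    descriptor extraction and matching.
    """
    return max(range(len(ags_scores)), key=ags_scores.__getitem__)


print(best_frame_index([0.41, 0.87, 0.63]))  # 1
```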

Face descriptor

Descriptor itself is a set of object parameters that are specially encoded. Descriptors are typically more or less invariant to various affine object transformations and slight color variations. This property allows efficient use of such sets to identify, lookup, and compare real-world objects’ images.

Descriptor extraction. Extraction is performed from object image areas around some previously discovered facial landmarks, so the quality of the descriptor highly depends on them and the image it was obtained from.

The face descriptor algorithm evolves over time, so newer FaceEngine versions contain improved models of the algorithm. Currently the following versions are available: 46, 51, 52 and 54. Versions 51, 52 and 54 are more precise than 46 but still work very fast on GPU. Version 54 is the most precise.

A descriptor object stores a compact set of packed properties as well as some helper parameters that were used to extract these properties from the source image. Together these parameters determine descriptor compatibility. Not all descriptors are compatible with each other. It is impossible to batch and match incompatible descriptors, so you should pay attention to what settings you use when extracting them.
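
A minimal guard before matching can compare the model version exposed by FaceDescriptor.model (documented below). Treating equal versions as compatible is an assumption consistent with the note above, not an exhaustive SDK compatibility rule:

```python
def can_match(model_a: int, model_b: int) -> bool:
    """Assume descriptors are matchable only when extracted with the same
    model version (e.g. 46, 51, 52 or 54). This mirrors the compatibility
    note above; it is not the SDK's full compatibility check."""
    return model_a == model_b


print(can_match(54, 54))  # True
print(can_match(54, 46))  # False
```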

Classes and methods

Module with base classes of estimators and estimations

class lunavl.sdk.estimators.base_estimation.BaseEstimation(coreEstimation)[source]

Base class for estimation structures.

_coreEstimation

core estimation

asDict()[source]

Convert to a dict.

Returns

dict from luna api

Return type

Union[dict, list]

coreEstimation

Get core estimation.

Returns

_coreEstimation

class lunavl.sdk.estimators.base_estimation.BaseEstimator(coreEstimator)[source]

Base estimator class. The class is a container for core estimations. Most estimated attributes can be accessed through corresponding properties.

_coreEstimator

core estimator

estimate(*args, **kwargs)[source]

Estimate attributes on warp.

Returns

estimated attributes

Return type

Any

Module contains a head pose estimator.

See head pose.

class lunavl.sdk.estimators.face_estimators.head_pose.FrontalType[source]

Enum for frontal types

BY_GOST = 'FrontalFace2'

GOST/ISO angles

FRONTAL = 'FrontalFace1'

Good for recognition; doesn’t decrease recall and looks fine

TURNED = 'FrontalFace0'

Non-frontal face

class lunavl.sdk.estimators.face_estimators.head_pose.HeadPose(coreHeadPose)[source]

Head pose. Estimate Tait–Bryan angles for head (https://en.wikipedia.org/wiki/Euler_angles#Tait–Bryan_angles). Estimation properties:

  • pitch

  • roll

  • yaw

asDict()[source]

Convert angles to dict.

Returns

{“pitch”: self.pitch, “roll”: self.roll, “yaw”: self.yaw}

Return type

Dict[str, float]

getFrontalType()[source]

Get frontal type of head pose estimation.

Returns

frontal type

Return type

FrontalType

pitch

Get the pitch angle.

Returns

float in range(0, 1)

Return type

float

roll

Get the roll angle.

Returns

float in range(0, 1)

Return type

float

yaw

Get the yaw angle.

Returns

float in range(0, 1)

Return type

float

class lunavl.sdk.estimators.face_estimators.head_pose.HeadPoseEstimator(coreHeadPoseEstimator)[source]

HeadPoseEstimator.

estimate(landmarks68)[source]

Implements the interface of the abstract estimator. Calls estimateBy68Landmarks.

Return type

HeadPose

estimateBy68Landmarks(landmarks68)[source]

Estimate head pose by 68 landmarks.

Parameters

landmarks68 – landmarks68

Returns

estimate head pose

Raises

LunaSDKException – if estimation failed

Return type

HeadPose

estimateByBoundingBox(detection, imageWithDetection)[source]

Estimate head pose by detection.

Parameters
  • detection – detection bounding box

  • imageWithDetection – image with the detection.

Returns

estimate head pose

Raises

LunaSDKException – if estimation failed

Return type

HeadPose

Module contains an emotion estimator

See emotions.

class lunavl.sdk.estimators.face_estimators.emotions.Emotion[source]

Emotions enum

Anger = 1

Anger

Disgust = 2

Disgust

Fear = 3

Fear

Happiness = 4

Happiness

Neutral = 5

Neutral

Sadness = 6

Sadness

Surprise = 7

Surprise

class lunavl.sdk.estimators.face_estimators.emotions.Emotions(coreEmotions)[source]

Container for storing estimated emotions. The list of emotions is represented in the enum Emotion. Each emotion is characterized by a score (a value in the range [0, 1]). The sum of all scores is equal to 1. The predominant emotion is the emotion with the maximum score.

Estimation properties:

  • anger

  • disgust

  • fear

  • happiness

  • sadness

  • surprise

  • neutral

  • predominateEmotion

anger

Get anger emotion value.

Returns

value in range [0, 1]

Return type

float

asDict()[source]

Convert estimation to dict.

Returns

dict with keys ‘predominate_emotion’ and ‘estimations’

disgust

Get disgust emotion value.

Returns

value in range [0, 1]

fear

Get fear emotion value.

Returns

value in range [0, 1]

happiness

Get happiness emotion value.

Returns

value in range [0, 1]

neutral

Get neutral emotion value.

Returns

value in range [0, 1]

predominateEmotion

Get predominate emotion (emotion with max score value).

Returns

emotion with max score value

Return type

Emotion

sadness

Get sadness emotion value.

Returns

value in range [0, 1]

surprise

Get surprise emotion value.

Returns

value in range [0, 1]

class lunavl.sdk.estimators.face_estimators.emotions.EmotionsEstimator(coreEstimator)[source]

Emotions estimator.

estimate(warp)[source]

Estimate emotion on warp.

Parameters

warp – warped image

Returns

estimated emotions

Raises

LunaSDKException – if estimation failed

Return type

Emotions

Module contains a mouth state estimator

See mouth state.

class lunavl.sdk.estimators.face_estimators.mouth_state.MouthStateEstimator(coreEstimator)[source]

Mouth state estimator.

estimate(warp)[source]

Estimate mouth state on warp.

Parameters

warp – warped image

Returns

estimated states

Raises

LunaSDKException – if estimation failed

Return type

MouthStates

class lunavl.sdk.estimators.face_estimators.mouth_state.MouthStates(coreEstimation)[source]

Mouth states. There are 3 states of the mouth: smile, occlusion, and neither a smile nor an occlusion detected.

Estimation properties:

  • smile

  • mouth

asDict()[source]

Convert to dict.

Returns

{‘score’: self.mouth, ‘occlusion’: self.occlusion, ‘smile’: self.smile}

mouth

Get mouth score value.

Returns

value in range [0, 1]

occlusion

Get occlusion score value.

Returns

value in range [0, 1]

smile

Get smile score value.

Returns

value in range [0, 1]

Return type

float

Module contains eye and gaze direction estimators

See eyes and gaze direction.

class lunavl.sdk.estimators.face_estimators.eyes.Eye(coreEstimation)[source]

Eye structure.

Estimation properties:

  • eyelid

  • iris

asDict()[source]

Convert to dict.

Returns

{“iris_landmarks”: self.irisLandmarks.asDict(), “eyelid_landmarks”: self.eyelidLandMarks.asDict(), “state”: self.state.name.lower()}

Return type

dict

eyelid

Get eyelid landmarks.

Returns

eyelid landmarks

Return type

EyelidLandmarks

iris

Get iris landmarks.

Returns

iris landmarks

Return type

IrisLandmarks

class lunavl.sdk.estimators.face_estimators.eyes.EyeEstimator(coreEstimator)[source]

Eye estimator.

estimate(transformedLandmarks, warp)[source]

Estimate eyes state on warp.

Parameters
  • warp – warped image

  • transformedLandmarks – transformed landmarks

Returns

estimated states

Raises

LunaSDKException – if estimation failed

Return type

EyesEstimation

class lunavl.sdk.estimators.face_estimators.eyes.EyeState[source]

Enum for eye states.

Closed = 3

eye is closed

Occluded = 2

eye is occluded

Open = 1

eye is opened

class lunavl.sdk.estimators.face_estimators.eyes.EyelidLandmarks(coreEyelidLandmarks)[source]

Eyelid landmarks.

class lunavl.sdk.estimators.face_estimators.eyes.EyesEstimation(coreEstimation)[source]

Eyes estimation structure.

leftEye

estimation for left eye

Type

Eye

rightEye

estimation for right eye

Type

Eye

asDict()[source]

Convert to dict.

Returns

{‘yaw’: self.leftEye, ‘pitch’: self.rightEye}

Return type

dict

class lunavl.sdk.estimators.face_estimators.eyes.GazeDirection(coreEstimation)[source]

Gaze direction structure. Estimation properties:

  • yaw

  • pitch

asDict()[source]

Convert to dict.

Returns

{‘yaw’: self.yaw, ‘pitch’: self.pitch}

Return type

dict

pitch

Get the pitch angle.

Returns

float in range(0, 1)

Return type

float

yaw

Get the yaw angle.

Returns

float in range(0, 1)

Return type

float

class lunavl.sdk.estimators.face_estimators.eyes.GazeEstimation(coreEstimation)[source]

Gaze estimation.

leftEye

left eye gaze direction

Type

GazeDirection

rightEye

right eye gaze direction

Type

GazeDirection

asDict()[source]

Convert self to a dict.

Returns

{“left_eye”: self.leftEye.asDict(), “right_eye”: self.rightEye.asDict()}

Return type

dict

class lunavl.sdk.estimators.face_estimators.eyes.GazeEstimator(coreEstimator)[source]

Gaze direction estimator.

estimate(headPose, eyesEstimation)[source]

Estimate a gaze direction

Parameters
  • headPose – head pose (calculated using landmarks68)

  • eyesEstimation – eyes estimation

Returns

estimated states

Raises

LunaSDKException – if estimation failed

Return type

GazeEstimation

class lunavl.sdk.estimators.face_estimators.eyes.IrisLandmarks(coreIrisLandmarks)[source]

Iris landmarks.

Module contains a basic attributes estimator.

See basic attributes.

class lunavl.sdk.estimators.face_estimators.basic_attributes.BasicAttributes(coreEstimation)[source]

Class for basic attribute estimation

age

age, number in range [0, 100]

Type

Optional[float]

gender

gender, number in range [0, 1]

Type

Optional[float]

ethnicity

ethnicity

Type

Optional[Ethnicities]

asDict()[source]

Convert to dict.

Returns

dict with keys “ethnicity”, “gender”, “age”

Return type

dict

class lunavl.sdk.estimators.face_estimators.basic_attributes.BasicAttributesEstimator(coreEstimator)[source]

Basic attributes estimator.

estimate(warp, estimateAge, estimateGender, estimateEthnicity)[source]

Estimate basic attributes.

Parameters
  • warp – warped image

  • estimateAge – estimate age or not

  • estimateGender – estimate gender or not

  • estimateEthnicity – estimate ethnicity or not

Returns

estimated attributes

Raises

LunaSDKException – if estimation failed

Return type

BasicAttributes

class lunavl.sdk.estimators.face_estimators.basic_attributes.Ethnicities(coreEstimation)[source]

Class for ethnicities estimation.

Estimation properties:

  • asian

  • indian

  • caucasian

  • africanAmerican

  • predominateEthnicity

africanAmerican

Get african american ethnicity value.

Returns

value in range [0, 1]

asDict()[source]

Convert to dict.

Returns

dict in platform format

Return type

dict

asian

Get asian ethnicity value.

Returns

value in range [0, 1]

Return type

float

caucasian

Get caucasian ethnicity value.

Returns

value in range [0, 1]

indian

Get indian ethnicity value.

Returns

value in range [0, 1]

predominateEthnicity

Get predominate ethnicity (ethnicity with max score value).

Returns

ethnicity with max score value

Return type

Ethnicity

class lunavl.sdk.estimators.face_estimators.basic_attributes.Ethnicity[source]

Enum for ethnicities.

AfricanAmerican = 1

african american

Asian = 2

asian

Caucasian = 4

caucasian

Indian = 3

indian


Module for estimating warped image quality.

See warp quality.

class lunavl.sdk.estimators.face_estimators.warp_quality.Quality(coreQuality)[source]

Quality structure

Estimation properties:

  • dark

  • blur

  • gray

  • light

asDict()[source]

Convert to dict.

Returns

{“darkness”: self.dark, “lightning”: self.light, “saturation”: self.gray, “blurness”: self.blur}

Return type

Dict[str, float]

blur

Get blur.

Returns

float in range(0, 1)

Return type

float

dark

Get dark.

Returns

float in range(0, 1)

Return type

float

gray

Get gray.

Returns

float in range(0, 1)

Return type

float

light

Get light.

Returns

float in range(0, 1)

Return type

float

class lunavl.sdk.estimators.face_estimators.warp_quality.WarpQualityEstimator(coreEstimator)[source]

Warp quality estimator.

estimate(warp)[source]

Estimate quality from a warp.

Parameters

warp – raw warped image or warp

Returns

estimated quality

Raises

LunaSDKException – if estimation failed

Return type

Quality

Module contains an approximate garbage score estimator

See ags.

class lunavl.sdk.estimators.face_estimators.ags.AGSEstimator(coreEstimator)[source]

Approximate garbage score estimator.

estimate(detection=None, image=None, boundingBox=None)[source]

Estimate approximate garbage score.

Parameters
  • image – image in R8G8B8 format

  • boundingBox – face bounding box corresponding to the image

  • detection – face detection

Returns

estimated ags, float in range [0, 1]

Raises

LunaSDKException – if estimation failed

Return type

float

Module contains a face descriptor estimator

See face descriptor.

class lunavl.sdk.faceengine.descriptors.FaceDescriptor(coreEstimation, garbageScore=0.0)[source]

Descriptor

garbageScore

garbage score

Type

float

asBytes

Get descriptor as bytes.

Return type

bytes

asDict()[source]

Convert to dict

Returns

Dict with keys “descriptor” and “score”

Return type

Dict[str, Union[float, bytes]]

asVector

Convert descriptor to a list of ints.

Returns

list of ints

Return type

List[int]

model

Get model of descriptor.

Returns

model version

Return type

int

rawDescriptor

Get raw descriptor.

Returns

bytes with metadata

Return type

bytes

class lunavl.sdk.faceengine.descriptors.FaceDescriptorBatch(coreEstimation, scores=None)[source]

Face descriptor batch.

scores

list of garbage scores

Type

List[float]

append(descriptor)[source]

Add descriptor to end of batch.

Parameters

descriptor – descriptor

Return type

None

asDict()[source]

Get batch in json like object.

Returns

list of descriptors dict

Return type

List[Dict[~KT, ~VT]]

class lunavl.sdk.faceengine.descriptors.FaceDescriptorFactory(faceEngine)[source]

Face Descriptor factory.

_faceEngine

faceEngine

Type

VLFaceEngine

generateDescriptor()[source]

Generate core descriptor

Returns

core descriptor

Return type

IDescriptorPtr

generateDescriptorsBatch(size)[source]

Generate core descriptors batch.

Parameters

size – batch size

Returns

batch

Return type

IDescriptorBatchPtr

Module contains a face descriptor estimator

See face descriptor.

class lunavl.sdk.estimators.face_estimators.face_descriptor.FaceDescriptorEstimator(coreExtractor, faceDescriptorFactory)[source]

Face descriptor estimator.

estimate(warp, descriptor=None)[source]

Estimate face descriptor from a warp image.

Parameters
  • warp – warped image

  • descriptor – descriptor for saving extract result

Returns

estimated descriptor

Raises

LunaSDKException – if estimation failed

Return type

FaceDescriptor

estimateDescriptorsBatch(warps, aggregate=False, descriptorBatch=None)[source]

Estimate a batch of descriptors from warped images.

Parameters
  • warps – warped images

  • aggregate – whether to estimate aggregate descriptor or not

  • descriptorBatch – optional batch for saving descriptors

Returns

tuple of batch and the aggregate descriptors (or None)

Raises

LunaSDKException – if estimation failed

Return type

Tuple[FaceDescriptorBatch, Optional[FaceDescriptor]]