# FaceAPI Tutorial

## Features

* Face Recognition
* Face Landmark Detection
* Face Expression Recognition
* Age Estimation & Gender Recognition
## Table of Contents

* **[Usage](#getting-started)**
  * **[Loading the Models](#getting-started-loading-models)**
  * **[High Level API](#high-level-api)**
  * **[Displaying Detection Results](#getting-started-displaying-detection-results)**
  * **[Face Detection Options](#getting-started-face-detection-options)**
  * **[Utility Classes](#getting-started-utility-classes)**
  * **[Other Useful Utility](#other-useful-utility)**
* **[Available Models](#models)**
  * **[Face Detection](#models-face-detection)**
  * **[Face Landmark Detection](#models-face-landmark-detection)**
  * **[Face Recognition](#models-face-recognition)**
  * **[Face Expression Recognition](#models-face-expression-recognition)**
  * **[Age Estimation and Gender Recognition](#models-age-and-gender-recognition)**
* **[API Documentation](https://justadudewhohacks.github.io/face-api.js/docs/globals.html)**


## Getting Started

### Loading the Models

All global neural network instances are exported via `faceapi.nets`:

```js
console.log(faceapi.nets)
// ageGenderNet
// faceExpressionNet
// faceLandmark68Net
// faceLandmark68TinyNet
// faceRecognitionNet
// ssdMobilenetv1
// tinyFaceDetector
// tinyYolov2
```

To load a model, you have to provide the corresponding manifest.json file as well as the model weight files (shards) as assets. Simply copy them to your public or assets folder. The manifest.json and shard files of a model have to be located in the same directory / accessible under the same route.

Assuming the models reside in **public/models**:

```js
await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
// accordingly for the other models:
// await faceapi.nets.faceLandmark68Net.loadFromUri('/models')
// await faceapi.nets.faceRecognitionNet.loadFromUri('/models')
// ...
```

In a Node.js environment you can also load the models directly from disk:

```js
await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models')
```

You can also load the model from a tf.NamedTensorMap:

```js
await faceapi.nets.ssdMobilenetv1.loadFromWeightMap(weightMap)
```

Alternatively, you can also create your own instances of the neural nets:

```js
const net = new faceapi.SsdMobilenetv1()
await net.loadFromUri('/models')
```

You can also load the weights as a Float32Array (in case you want to use the uncompressed models):

```js
// using fetch
net.load(await faceapi.fetchNetWeights('/models/face_detection_model.weights'))

// using axios
const res = await axios.get('/models/face_detection_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
net.load(weights)
```

### High Level API

In the following, **input** can be an HTML img, video or canvas element, or the id of that element.

```html