This repository was archived by the owner on Jul 13, 2023. It is now read-only.
Merged
1 change: 1 addition & 0 deletions .gitignore
@@ -10,3 +10,4 @@ system-test/*key.json
package-lock.json
.vscode
__pycache__
*.code-workspace
362 changes: 362 additions & 0 deletions samples/analyze.v1p3beta1.js
@@ -0,0 +1,362 @@
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

'use strict';

async function detectPerson(path) {
// [START video_detect_person_beta]

Contributor: Add space: // [START

Contributor Author: Done.

// Imports the Google Cloud Video Intelligence library + Node's fs library
const Video = require('@google-cloud/video-intelligence').v1p3beta1;
const fs = require('fs');
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

/**
* TODO(developer): Uncomment the following line before running the sample.
*/
// const path = 'Local file to analyze, e.g. ./my-file.mp4';

// Reads a local video file and converts it to base64
const file = fs.readFileSync(path);
const inputContent = file.toString('base64');

const request = {
inputContent: inputContent,
features: ['PERSON_DETECTION'],
videoContext: {
personDetectionConfig: {
// Must set includeBoundingBoxes to true to get poses and attributes.
includeBoundingBoxes: true,

Contributor: You don't use the bounding boxes in the output; is it worth adding?

Contributor Author: The bounding boxes are necessary to get the person and face detection segments, but they don't need to be read. Reading bounding boxes is a pretty common task for the Video Intelligence API. On one hand, developers should be familiar with it already, and this sample is really more about the attributes. On the other hand, it is a common task for using the API and maybe a good thing to include anyway. I don't have a strong feeling one way or the other about bounding boxes, besides that adding them makes this sample longer. What is your preference?

includePoseLandmarks: true,
includeAttributes: true,
},
},
};
// Detects people in a video
console.log('Waiting for operation to complete...');
const [operation] = await video.annotateVideo(request);
const results = await operation.promise();

// Gets annotations for the video; only one video was processed,
// so we read the first (and only) result.

Contributor: Add a note that we get the first result because only 1 video is processed.

const personAnnotations =
results[0].annotationResults[0].personDetectionAnnotations;

for (const {tracks} of personAnnotations) {
console.log('Person detected:');
for (const {segment, timestampedObjects} of tracks) {
if (segment.startTimeOffset.seconds === undefined) {
segment.startTimeOffset.seconds = 0;
}
if (segment.startTimeOffset.nanos === undefined) {
segment.startTimeOffset.nanos = 0;
}
if (segment.endTimeOffset.seconds === undefined) {
segment.endTimeOffset.seconds = 0;
}
if (segment.endTimeOffset.nanos === undefined) {
segment.endTimeOffset.nanos = 0;
}
Comment on lines +58 to +69

Contributor: @telpirion, does this ever happen? If so, that seems like an issue we need to raise.

Contributor Author: It can happen, yes. Sometimes the result from the API simply doesn't return a value, and thus the property is undefined. TBH, I'm following the pattern demonstrated here (among other places):
https://github.com/googleapis/nodejs-video-intelligence/blob/master/samples/analyze.v1p2beta1.js#L42-L61
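
A minimal sketch, not part of the sample: the four `undefined` checks above could be collapsed into one small helper. It assumes `offset` is a protobuf `Duration`-like object whose `seconds` and `nanos` fields the API may leave unset, as described in this thread:

const formatOffset = offset =>
  `${offset.seconds || 0}.${((offset.nanos || 0) / 1e6).toFixed(0)}s`;
// e.g. console.log(`\tStart: ${formatOffset(segment.startTimeOffset)}`);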

console.log(
`\tStart: ${segment.startTimeOffset.seconds}.` +
`${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log(
`\tEnd: ${segment.endTimeOffset.seconds}.` +
`${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
);

// Each segment includes timestamped objects that
// include characteristics of the person detected,
// e.g. clothes and posture.
const [firstTimestampedObject] = timestampedObjects;

// Attributes include unique pieces of clothing,
// poses, or hair color.
for (const {name, value} of firstTimestampedObject.attributes) {
console.log(`\tAttribute: ${name}; ` + `Value: ${value}`);
}

// Landmarks in person detection include body parts.
for (const {name, point} of firstTimestampedObject.landmarks) {
console.log(`\tLandmark: ${name}; Vertex: ${point.x}, ${point.y}`);
}
}
}
// [END video_detect_person_beta]
}
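
The bounding boxes discussed in the review thread above are requested but never printed. A minimal sketch of how one could be read, assuming `timestampedObject` is an entry of `track.timestampedObjects` and using the `normalizedBoundingBox` field of the v1p3beta1 `TimestampedObject` message:

function logBoundingBox(timestampedObject) {
  const box = timestampedObject.normalizedBoundingBox;
  if (!box) {
    // Boxes are only populated when includeBoundingBoxes is true.
    return;
  }
  // Coordinates are relative to the frame size, in the range [0, 1].
  console.log(
    `\tBox: left=${box.left}, top=${box.top}, ` +
      `right=${box.right}, bottom=${box.bottom}`
  );
}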
async function detectPersonGCS(gcsUri) {
// [START video_detect_person_gcs_beta]

Contributor: Add space // [START

// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/video-intelligence').v1p3beta1;
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

/**
* TODO(developer): Uncomment the following line before running the sample.
*/
// const gcsUri = 'GCS URI of the video to analyze, e.g. gs://my-bucket/my-video.mp4';

const request = {
inputUri: gcsUri,
features: ['PERSON_DETECTION'],
videoContext: {
personDetectionConfig: {
// Must set includeBoundingBoxes to true to get poses and attributes.
includeBoundingBoxes: true,
includePoseLandmarks: true,
includeAttributes: true,
},
},
};
// Detects people in a video
console.log('Waiting for operation to complete...');
const [operation] = await video.annotateVideo(request);
const results = await operation.promise();

// Gets annotations for the video; only one video was processed,
// so we read the first (and only) result.
const personAnnotations =
results[0].annotationResults[0].personDetectionAnnotations;

for (const {tracks} of personAnnotations) {
console.log('Person detected:');

for (const {segment, timestampedObjects} of tracks) {
if (segment.startTimeOffset.seconds === undefined) {
segment.startTimeOffset.seconds = 0;
}
if (segment.startTimeOffset.nanos === undefined) {
segment.startTimeOffset.nanos = 0;
}
if (segment.endTimeOffset.seconds === undefined) {
segment.endTimeOffset.seconds = 0;
}
if (segment.endTimeOffset.nanos === undefined) {
segment.endTimeOffset.nanos = 0;
}
console.log(
`\tStart: ${segment.startTimeOffset.seconds}` +
`.${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log(
`\tEnd: ${segment.endTimeOffset.seconds}.` +
`${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
);

// Each segment includes timestamped objects that
// include characteristics of the person detected,
// e.g. clothes and posture.
const [firstTimestampedObject] = timestampedObjects;

// Attributes include unique pieces of clothing,
// poses, or hair color.
for (const {name, value} of firstTimestampedObject.attributes) {
console.log(`\tAttribute: ${name}; ` + `Value: ${value}`);
}

// Landmarks in person detection include body parts.
for (const {name, point} of firstTimestampedObject.landmarks) {
console.log(`\tLandmark: ${name}; Vertex: ${point.x}, ${point.y}`);
}
}
}
// [END video_detect_person_gcs_beta]

Contributor: video_detect_person_gcs_beta

Contributor Author: Acknowledged. It's already fixed :).

}
async function detectFaces(path) {
// [START video_detect_faces_beta]
// Imports the Google Cloud Video Intelligence library + Node's fs library
const Video = require('@google-cloud/video-intelligence').v1p3beta1;
const fs = require('fs');
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

/**
* TODO(developer): Uncomment the following line before running the sample.
*/
// const path = 'Local file to analyze, e.g. ./my-file.mp4';

// Reads a local video file and converts it to base64
const file = fs.readFileSync(path);
const inputContent = file.toString('base64');

const request = {
inputContent: inputContent,
features: ['FACE_DETECTION'],
videoContext: {
faceDetectionConfig: {
// Must set includeBoundingBoxes to true to get facial attributes.
includeBoundingBoxes: true,
includeAttributes: true,
},
},
};
// Detects faces in a video
console.log('Waiting for operation to complete...');
const [operation] = await video.annotateVideo(request);
const results = await operation.promise();

// Gets annotations for the video; only one video was processed,
// so we read the first (and only) result.
const faceAnnotations =
results[0].annotationResults[0].faceDetectionAnnotations;

for (const {tracks} of faceAnnotations) {
console.log('Face detected:');
for (const {segment, timestampedObjects} of tracks) {
if (segment.startTimeOffset.seconds === undefined) {
segment.startTimeOffset.seconds = 0;
}
if (segment.startTimeOffset.nanos === undefined) {
segment.startTimeOffset.nanos = 0;
}
if (segment.endTimeOffset.seconds === undefined) {
segment.endTimeOffset.seconds = 0;
}
if (segment.endTimeOffset.nanos === undefined) {
segment.endTimeOffset.nanos = 0;
}
console.log(
`\tStart: ${segment.startTimeOffset.seconds}` +
`.${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log(
`\tEnd: ${segment.endTimeOffset.seconds}.` +
`${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
);

// Each segment includes timestamped objects that
// include characteristics of the face detected.
const [firstTimestampedObject] = timestampedObjects;

for (const {name} of firstTimestampedObject.attributes) {
// Attributes include facial characteristics, like glasses.
console.log(`\tAttribute: ${name}`);
}
}
}

Contributor: // [END video_detect_faces_beta]

// [END video_detect_faces_beta]
}
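
A related sketch, assuming the v1p3beta1 `DetectedAttribute` message: besides `name` (and, for person detection, `value`), each attribute also carries a `confidence` score that a sample could surface:

function logAttributeConfidence(timestampedObject) {
  for (const {name, confidence} of timestampedObject.attributes) {
    console.log(`\tAttribute: ${name} (confidence: ${confidence})`);
  }
}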
async function detectFacesGCS(gcsUri) {
// [START video_detect_faces_gcs_beta]
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/video-intelligence').v1p3beta1;
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

/**
* TODO(developer): Uncomment the following line before running the sample.
*/
// const gcsUri = 'GCS URI of the video to analyze, e.g. gs://my-bucket/my-video.mp4';

const request = {
inputUri: gcsUri,
features: ['FACE_DETECTION'],
videoContext: {
faceDetectionConfig: {
// Must set includeBoundingBoxes to true to get facial attributes.
includeBoundingBoxes: true,
includeAttributes: true,
},
},
};
// Detects faces in a video
console.log('Waiting for operation to complete...');
const [operation] = await video.annotateVideo(request);
const results = await operation.promise();

// Gets annotations for the video; only one video was processed,
// so we read the first (and only) result.
const faceAnnotations =
results[0].annotationResults[0].faceDetectionAnnotations;

for (const {tracks} of faceAnnotations) {
console.log('Face detected:');

for (const {segment, timestampedObjects} of tracks) {
if (segment.startTimeOffset.seconds === undefined) {
segment.startTimeOffset.seconds = 0;
}
if (segment.startTimeOffset.nanos === undefined) {
segment.startTimeOffset.nanos = 0;
}
if (segment.endTimeOffset.seconds === undefined) {
segment.endTimeOffset.seconds = 0;
}
if (segment.endTimeOffset.nanos === undefined) {
segment.endTimeOffset.nanos = 0;
}
console.log(
`\tStart: ${segment.startTimeOffset.seconds}.` +
`${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
);
console.log(
`\tEnd: ${segment.endTimeOffset.seconds}.` +
`${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
);

// Each segment includes timestamped objects that
// include characteristics of the face detected.
const [firstTimestampedObject] = timestampedObjects;

for (const {name} of firstTimestampedObject.attributes) {
// Attributes include facial characteristics, like glasses.
console.log(`\tAttribute: ${name}`);
}
}
}

Contributor: // [END video_detect_faces_gcs_beta]

// [END video_detect_faces_gcs_beta]
}

async function main() {
require(`yargs`)
.demand(1)
.command(
`video-person-gcs <gcsUri>`,
`Detects people in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.`,
{},
opts => detectPersonGCS(opts.gcsUri)
)
.command(
`video-person <path>`,
`Detects people in a video stored in a local file using the Cloud Video Intelligence API.`,
{},
opts => detectPerson(opts.path)
)
.command(
`video-faces-gcs <gcsUri>`,
`Detects faces in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.`,
{},
opts => detectFacesGCS(opts.gcsUri)
)
.command(
`video-faces <path>`,
`Detects faces in a video stored in a local file using the Cloud Video Intelligence API.`,
{},
opts => detectFaces(opts.path)
)
.example(`node $0 video-person ./resources/googlework_short.mp4`)
.example(
`node $0 video-person-gcs gs://cloud-samples-data/video/googlework_short.mp4`
)
.example(`node $0 video-faces ./resources/googlework_short.mp4`)
.example(
`node $0 video-faces-gcs gs://cloud-samples-data/video/googlework_short.mp4`
)
.wrap(120)
.recommendCommands()
.epilogue(
`For more information, see https://cloud.google.com/video-intelligence/docs`
)
.help()
.strict().argv;
}

main().catch(console.error);