AWS Open Source Blog
FFmpeg is an industry-standard, widely used open source utility for handling video. FFmpeg has many capabilities, including encoding and decoding all common video compression formats, encoding and decoding audio, encapsulating and extracting audio and video from transport streams, and much more.
AWS customers who want to use FFmpeg on AWS have to maintain it themselves on an Amazon Elastic Compute Cloud (Amazon EC2) instance and develop a workflow manager to ingest and manipulate media assets. This is painful.
In this post, I will show how to integrate FFmpeg with AWS services to build an easier-to-manage FFmpeg deployment. We’ve created an open source solution that deploys FFmpeg packaged in a container and managed by AWS Batch. When the deployment is finished, you will be able to execute an FFmpeg command as a job through a REST API. This solution improves usability and relieves you of the management learning curve and maintenance costs of running open source FFmpeg on AWS.
Solution overview
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. There is no additional charge for AWS Batch; you pay only for the AWS compute resources you use. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.
As of February 2023, AWS offers 15 general purpose EC2 instance families, 11 compute optimized instance families, and 14 accelerated computing instances. By correlating each instance family's specification with the FFmpeg hardware acceleration APIs, we've identified the following instance families that can optimize the performance of FFmpeg:
- NVIDIA GPU-powered Amazon EC2 instances: The P3 instance family comes equipped with the NVIDIA Tesla V100 GPU, and the G4dn instance family is powered by NVIDIA T4 GPUs and Intel Cascade Lake CPUs. These GPUs are well suited for video coding workloads and offer enhanced hardware-based encoding/decoding (NVENC/NVDEC).
- Xilinx media accelerator cards: VT1 instances are powered by up to 8 Xilinx® Alveo U30 media accelerator cards and support up to 96 vCPUs, 192 GB of memory, 25 Gbps of enhanced networking, and 19 Gbps of EBS bandwidth. The Xilinx Video SDK includes an enhanced version of FFmpeg that can communicate with the hardware-accelerated transcode pipeline in Xilinx devices. VT1 instances deliver up to 30% lower cost per stream than Amazon EC2 GPU-based instances and up to 60% lower cost per stream than Amazon EC2 CPU-based instances.
- EC2 instances powered by Intel: M6i/C6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz.
- AWS Graviton-based instances: Encoding video on C7g instances, the latest AWS Graviton processor family, costs 29% less for H.264 and 18% less for H.265 compared to C6i, as described in the blog post ‘Optimized Video Encoding with FFmpeg on AWS Graviton’.
- AMD-powered EC2 instances: M6a instances are powered by 3rd generation AMD EPYC processors (code named Milan).
- Serverless compute with Fargate: Fargate allows you to have a completely serverless architecture for your batch jobs. With Fargate, every job receives the exact amount of CPU and memory that it requests.
We are going to create a managed video encoding pipeline using AWS Batch with FFmpeg in container images. With it, you can perform a simple transmuxing operation, add a silent audio track, extract an audio or video track, change the video container format, concatenate video files, generate thumbnails, or create a timelapse. As a starting point, this pipeline uses the Intel (C5), Graviton (C6g), Nvidia (G4dn), AMD (C5a, M5a), and Fargate instance families.
The architecture includes five key components:
- Container images are stored in an Amazon Elastic Container Registry (Amazon ECR). Each container includes an FFmpeg library with a Python wrapper. Container images are specialized per CPU/GPU architecture: ARM64, x86-64, and NVIDIA.
- AWS Batch is configured with a queue and compute environment per CPU architecture. AWS Batch schedules job queues using Spot Instance compute environments only, to optimize cost.
- Customers submit jobs through AWS SDKs with the ‘SubmitJob’ operation or use the Amazon API Gateway REST API to easily submit a job with any HTTP library.
- All media assets ingested and produced are stored in an Amazon Simple Storage Service (Amazon S3) bucket.
- Observability is managed by Amazon CloudWatch and AWS X-Ray. All X-Ray traces are exported to Amazon S3 to benchmark which compute architecture performs best for a specific FFmpeg command.
Prerequisites
You need the following prerequisites to set up the solution:
- An AWS account with privileges to create AWS Identity and Access Management (IAM) roles and policies. For more information, see Overview of access management: Permissions and policies.
- Latest version of AWS Cloud Development Kit (AWS CDK) with bootstrapping already done.
- Latest version of Task, a task runner and build tool.
- Latest version of Docker.
- Latest version of Python 3.
Deploy the solution with AWS CDK
To deploy the solution “AWS Batch with FFmpeg” on your account, complete the following steps:
- Clone the GitHub repository https://github.com/aws-samples/aws-batch-with-ffmpeg
- Execute this list of commands:
# Create a local Python virtual environment and install requirements
task venv
# Activate the Python virtual environment
source .venv/bin/activate
# Deploy the CDK stack
task cdk:deploy
# Collect AWS CloudFormation outputs from the stack
task env
# Build and push docker images for AMD64 processor architecture
task app:docker-amd64
# Build and push docker images for ARM64 processor architecture
task app:docker-arm64
# Build and push docker images for NVIDIA processor architecture
task app:docker-nvidia
AWS CDK outputs the name of the new Amazon S3 bucket where you can upload and download video assets, and the Amazon API Gateway REST endpoint with which you can submit video jobs.
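If you need these outputs again later, you can read them directly from the deployed CloudFormation stack. Here is a minimal sketch with Boto3; the stack name `batch-ffmpeg-stack` is an assumption inferred from the bucket naming used later in this post:

import boto3

# Assumption: the CDK stack is named "batch-ffmpeg-stack" (inferred
# from the bucket name "batch-ffmpeg-stack-bucketxxxx" shown below).
cloudformation = boto3.client("cloudformation")
stack = cloudformation.describe_stacks(StackName="batch-ffmpeg-stack")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])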
Use the solution
Once the “AWS Batch with FFmpeg” solution is installed, you can execute FFmpeg commands with the AWS SDKs, the AWS Command Line Interface (AWS CLI) or the API. The solution respects the typical syntax of the FFmpeg command described in the official documentation:
ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...
Parameters of the solution are:
global_options
: FFmpeg global options described in the official documentation.

input_file_options
: FFmpeg input file options described in the official documentation.

input_url
: Amazon S3 URL of the input, synced to local storage and transformed to a local path by the solution.

output_file_options
: FFmpeg output file options described in the official documentation.

output_url
: Amazon S3 URL of the output, synced from local storage to Amazon S3.

compute
: instance family used to process the media asset: intel, arm, amd, nvidia, fargate.

name
: metadata of this job for observability.
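To relate these parameters to the FFmpeg syntax above, the clip job built in the next example corresponds to the following local command (file names are illustrative):

ffmpeg -i myvideo.mp4 -ss 00:00:10 -t 00:00:15 -c:v copy -c:a copy output.mp4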
In this example, we use the AWS SDK for Python (Boto3) and we want to cut a specific part of a video. As a prerequisite, we uploaded a video in the Amazon S3 bucket created by the solution. Now, we complete the parameters below:
import boto3
import requests
from urllib.parse import urlparse
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth
# AWS CloudFormation output of the Amazon S3 bucket created by the solution: s3://batch-ffmpeg-stack-bucketxxxx/
s3_bucket_url = "<S3_BUCKET>"
# Amazon S3 key of the media asset uploaded to the S3 bucket, to be processed by the FFmpeg command: test/myvideo.mp4
s3_key_input = "<MEDIA_ASSET>"
# Amazon S3 key of the result of the FFmpeg command: test/output.mp4
s3_key_output = "<MEDIA_ASSET>"
# EC2 instance family: `intel`, `arm`, `amd`, `nvidia`, `fargate`
compute = "intel"
job_name = "clip-video"
command = {
    "name": job_name,
    # "global_options": "",
    "input_url": s3_bucket_url + s3_key_input,
    # "input_file_options": "",
    "output_url": s3_bucket_url + s3_key_output,
    "output_file_options": "-ss 00:00:10 -t 00:00:15 -c:v copy -c:a copy",
}
Then, we submit the FFmpeg command with the AWS SDK for Python (Boto3):
batch = boto3.client("batch")
result = batch.submit_job(
    jobName=job_name,
    jobQueue="batch-ffmpeg-job-queue-" + compute,
    jobDefinition="batch-ffmpeg-job-definition-" + compute,
    parameters=command,
)
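The submit_job operation returns the job identifier, which we can use to track the job until the FFmpeg command completes. For example:

import time

job_id = result["jobId"]
while True:
    # Poll the AWS Batch job status until it reaches a terminal state
    job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
    status = job["status"]
    print("Job status:", status)
    if status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(30)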
We can also submit the same FFmpeg command with the REST API through an HTTP POST method. We control access to this Amazon API Gateway API with IAM permissions:
# AWS Signature Version 4 Signing process with Python Requests
def apig_iam_auth(rest_api_url):
    domain = urlparse(rest_api_url).netloc
    auth = BotoAWSRequestsAuth(
        aws_host=domain, aws_region="<AWS_REGION>", aws_service="execute-api"
    )
    return auth
# AWS CloudFormation output of the Amazon API Gateway REST API created by the solution: https://xxxx.execute-api.xx-west-1.amazonaws.com/prod/
api_endpoint = "<API_ENDPOINT>"
auth = apig_iam_auth(api_endpoint)
url = api_endpoint + compute + "/ffmpeg"
response = requests.post(url=url, json=command, auth=auth, timeout=2)
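Assuming the API proxies the SubmitJob operation, the response body should contain the resulting AWS Batch job metadata; checking the HTTP status code confirms the submission:

# Assumption: the response body mirrors the SubmitJob result
response.raise_for_status()
print(response.json())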
By default, AWS Batch chooses an available EC2 instance type. If you want to override it, you can add the `nodeOverrides`
property when you submit a job with the SDK:
instance_type = "c5.large"
result = batch.submit_job(
    jobName=job_name,
    jobQueue="batch-ffmpeg-job-queue-" + compute,
    jobDefinition="batch-ffmpeg-job-definition-" + compute,
    parameters=command,
    nodeOverrides={
        "nodePropertyOverrides": [
            {
                "targetNodes": "0,n",
                "containerOverrides": {
                    "instanceType": instance_type,
                },
            },
        ]
    },
)
And with the REST API:
command["instance_type"] = instance_type
url = api_endpoint + compute + "/ffmpeg"
response = requests.post(url=url, json=command, auth=auth, timeout=2)
Metrics
AWS customers can also use this solution to benchmark the video encoding performance of Amazon EC2 instance families. This solution analyzes performance and video quality metrics with AWS X-Ray.
AWS X-Ray helps developers analyze and debug applications. With X-Ray, we can understand how our application and its underlying services are performing to identify and troubleshoot the cause of performance issues.
We defined three X-Ray segments: Amazon S3 download, FFmpeg execution, and Amazon S3 upload.
In the AWS console (AWS Systems Manager > Parameter Store), switch the AWS Systems Manager parameter /batch-ffmpeg/ffqm to TRUE. The video quality metrics PSNR, SSIM, and VMAF are then calculated by FFmpeg and exported as AWS X-Ray metadata and as a JSON file uploaded to the Amazon S3 bucket with the key prefix /metrics/ffqm.
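You can also switch this parameter programmatically. Here is a minimal sketch with Boto3:

import boto3

ssm = boto3.client("ssm")
# Enable computation of the FFmpeg quality metrics (PSNR, SSIM, VMAF)
ssm.put_parameter(
    Name="/batch-ffmpeg/ffqm",
    Value="TRUE",
    Type="String",
    Overwrite=True,
)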
All JSON files are crawled by an AWS Glue crawler. This crawler provides an Amazon Athena table against which you can execute SQL queries to analyze the performance of your workload.
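For example, you could compare the average FFmpeg execution time per instance type. Here is a sketch with Boto3 and Athena; the database, table, and column names below are assumptions that depend on the schema the Glue crawler generates in your account:

import boto3

athena = boto3.client("athena")
# Hypothetical database, table, and column names: adapt them to the
# schema created by the AWS Glue crawler in your account.
query = """
SELECT instance_type, AVG(duration) AS avg_duration
FROM "batch_ffmpeg"."metrics_ffqm"
GROUP BY instance_type
ORDER BY avg_duration
"""
execution = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://<S3_BUCKET>/athena-results/"},
)
print(execution["QueryExecutionId"])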
For example, we created a visual bar chart with Amazon QuickSight, using our Athena table as the dataset. As the chart shows, for the job name compress-video launched with several instance types, the most efficient instance type is c5a.2xlarge.
Extend the solution
You can customize and extend the solution as you want. For example, you can customize the FFmpeg Docker image by adding libraries or upgrading the FFmpeg version; all Dockerfiles are located in application/docker-images/. You can also customize the list of Amazon EC2 instance types used by the solution to optimize performance, by updating the CDK stack located in cdk/batch_job_ffmpeg_stack.py.
Cost
There is no additional charge for AWS Batch. You pay only for the AWS resources created to store assets and run the solution. We use Spot Instances to optimize cost. With the metrics provided by AWS X-Ray, you can benchmark all instance families to find the best one for your use case.
Cleanup
To avoid incurring unnecessary charges, clean up the resources you created for testing this solution.
- Delete all objects in the Amazon S3 bucket (a Boto3 sketch for this follows below).
- Inside the Git repository, execute this command in a terminal:
task cdk:destroy
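To empty the bucket from the first cleanup step programmatically, here is a minimal sketch with Boto3 (the bucket name is a placeholder):

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("<S3_BUCKET_NAME>")
# Delete all current objects; a versioned bucket would also require
# deleting object versions (bucket.object_versions.all().delete()).
bucket.objects.all().delete()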
Summary
In this post, we covered the process of setting up an FFmpeg workflow managed by AWS Batch. The solution includes an option to benchmark the video encoding performance of Amazon EC2 instance families. The solution is managed by AWS Batch, a scalable and cost-effective service that uses EC2 Spot Instances.
This solution is open source and available at https://github.com/aws-samples/aws-batch-with-ffmpeg. You can give us feedback through GitHub issues.