LILT accelerates multilingual content creation for enterprises at scale with NVIDIA GPUs and NVIDIA NeMo™.
LILT
AWS
Generative AI / LLMs
NVIDIA A100 Tensor Core GPUs
NVIDIA T4 Tensor Core GPUs
NVIDIA NeMo™
LILT’s generative AI platform can translate a large amount of content rapidly.
When a European law enforcement agency needed a dynamic solution to translate high volumes of content in low-resource languages within tight time constraints, they turned to LILT. LILT’s generative AI platform, powered by large language models, enabled faster translation of time-sensitive information at scale by leveraging NVIDIA GPUs and NVIDIA NeMo, an end-to-end enterprise framework for building, customizing, and deploying generative AI models.
The law enforcement agency collects high volumes of evidence written in low-resource foreign languages, which have limited digital resources and data available for efficient translation. Time limits on investigations and evidence admissibility, coupled with a small pool of linguists available for translation, threatened the agency’s ability to prosecute criminals.
With a versatile workflow of at-desk tools, LILT’s generative AI platform has kept these resource constraints from limiting the agency’s effectiveness in stopping crime. Non-linguists work autonomously, using machine translation to triage documents for high-value content, including sending bulk translations through LILT’s API to the cloud. Linguists can then focus their time and skills on translating evidentiary documents with LILT’s predictive, adaptive translation tool. This workflow has been deployed in major operations, aiding hundreds of criminal arrests and the confiscation of illegal drugs and weapons.
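A triage step like the one described above typically batches documents for bulk machine translation before routing high-value items to linguists. The sketch below is illustrative only: the batch size, helper names, and the stubbed translation call are assumptions, not LILT’s actual API.

```python
# Hypothetical sketch of a bulk machine-translation triage step.
# BATCH_SIZE, submit_batch, and the payload shape are illustrative
# assumptions, not LILT's actual API.

BATCH_SIZE = 32  # documents per bulk request (assumed)


def make_batches(documents: list[str], batch_size: int = BATCH_SIZE) -> list[list[str]]:
    """Group documents into fixed-size batches for bulk submission."""
    return [documents[i:i + batch_size] for i in range(0, len(documents), batch_size)]


def submit_batch(batch: list[str]) -> list[str]:
    """Stand-in for a cloud translation call; a real client would POST
    the batch to the translation service and poll for results."""
    return [f"<translated:{doc}>" for doc in batch]


def triage(documents: list[str]) -> list[str]:
    """Machine-translate every document so non-linguists can scan the
    output for high-value content before escalating to linguists."""
    translations: list[str] = []
    for batch in make_batches(documents):
        translations.extend(submit_batch(batch))
    return translations
```

In a real deployment, `submit_batch` would be an asynchronous API call, letting throughput scale with available GPU capacity rather than with the number of human reviewers.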
Image courtesy of CCC
Leveraging LILT and NVIDIA GPUs, the agency achieved translation throughput rates of up to 150,000 words per minute (benchmarked across four languages). The deployment on AWS supports scaling far beyond what was possible on premises. Scalable GPU resources easily increase throughput during peak workloads, meaning that no end user is waiting for a queue to be processed and no mission is left without adequate support and resourcing.
To achieve this result, LILT accelerated model training with NVIDIA A100 Tensor Core GPUs and model inference with NVIDIA T4 Tensor Core GPUs. Using models developed with NeMo, LILT’s platform delivers up to a 30X boost in character throughput during inference compared with equivalent models running on CPUs. In addition, the NVIDIA AI platform enables LILT to increase its model size by 5X, with significant improvement not only in latency but also in quality.
NeMo is included as a part of NVIDIA AI Enterprise, which provides a production-grade, secure, end-to-end software platform for enterprises building and deploying accelerated AI software.
LILT’s adaptive machine learning models improve continuously, particularly when linguists review content and provide input, which is then used as training data for model fine-tuning. Thanks to this continuous context enhancement, LILT’s dynamic tools keep pace with ever-changing colloquial language across social media and other crucial content sources. LILT also deploys a multi-faceted workflow with applications for linguists and non-linguists alike, allowing each team to work autonomously and apply its unique skill set for maximum efficiency in time-sensitive situations.
Learn more about how other organizations are leveraging LILT to improve their own operations and customer experiences.
LILT is a member of NVIDIA Inception, a free program that nurtures startups revolutionizing industries with technological advancements. As an Inception partner, LILT has taken advantage of cloud credits, marketing opportunities, hardware discounts, venture capital, NVIDIA Deep Learning Institute credits, and NVIDIA expert technical support.
What Is NVIDIA Inception?
NVIDIA Inception Program Benefits
Join NVIDIA Inception’s global network of over 15,000 technology startups.