Enabling a Fast Path to Large Language Model-Based AI Applications
, Principal Product Manager- Conversational AI and Deep Learning, NVIDIA
Large language models (LLMs) keep growing in size and capability. Yet deploying them in applications requires deep technical expertise and massive amounts of compute, putting them out of reach for most developers. Advances such as p-tuning have changed how practitioners can apply LLMs across workloads and industries, including content generation, summarization, chatbots, healthcare, drug discovery, marketing, and code generation.
In this talk, we will highlight several paths to building AI applications around customized LLMs with hundreds of billions of parameters, making these models accessible to all software developers. State-of-the-art techniques such as p-tuning customize an LLM for a specific use case by training only a small set of continuous prompt embeddings while the base model stays frozen. The NVIDIA AI platform unlocks rapid prototyping, experimentation, and development of such customized LLMs.
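As a rough illustration of the p-tuning idea discussed above, the sketch below freezes a small, publicly available GPT-2 model and trains only a prompt encoder whose continuous prompt embeddings are prepended to the input. The model choice, class names, hyperparameters, and toy training step are illustrative assumptions for this abstract, not the implementation covered in the talk.

```python
# Minimal sketch of p-tuning: the base LLM stays frozen; only a small
# prompt encoder that produces continuous prompt embeddings is trained.
# GPT-2 is used here only as a convenient stand-in for a large model.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class PromptEncoder(nn.Module):
    """Maps learnable virtual tokens to continuous prompt embeddings via a small MLP."""
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        self.embedding = nn.Embedding(num_virtual_tokens, hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        idx = torch.arange(self.embedding.num_embeddings)
        prompts = self.mlp(self.embedding(idx))            # (num_tokens, hidden)
        return prompts.unsqueeze(0).expand(batch_size, -1, -1)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():                               # freeze the base LLM
    p.requires_grad = False

prompt_encoder = PromptEncoder(num_virtual_tokens=20, hidden_size=model.config.n_embd)
optimizer = torch.optim.AdamW(prompt_encoder.parameters(), lr=1e-3)

# One illustrative training step on a toy example.
enc = tokenizer("Summarize: NVIDIA announced new AI tools.", return_tensors="pt")
input_embeds = model.transformer.wte(enc["input_ids"])     # frozen token embeddings
soft_prompts = prompt_encoder(batch_size=input_embeds.size(0))
inputs = torch.cat([soft_prompts, input_embeds], dim=1)

# Ignore the virtual-token positions in the loss; predict only the real tokens.
ignore = torch.full((1, soft_prompts.size(1)), -100, dtype=torch.long)
labels = torch.cat([ignore, enc["input_ids"]], dim=1)

loss = model(inputs_embeds=inputs, labels=labels).loss
loss.backward()                                            # gradients reach only the prompt encoder
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Because only the prompt encoder's parameters are updated, the same frozen base model can serve many use cases, each with its own small set of learned prompts.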