Connect with Experts at GTC
Join the event for inquisitive discussions and Q&A opportunities with industry leaders.
Connect with Experts at NVIDIA GTC offers a unique opportunity to interact, in groups or one-on-one, with the brilliant minds behind NVIDIA's products and research.
We're hosting 60-minute live Q&A sessions that answer your technical questions with how-tos and best practices.
Register to join in-depth discussions on a wide range of topics and industry use cases.
(English only)
Meet Kaggle grandmasters and learn how to approach and succeed in different types of Kaggle competitions including tabular, image, natural language processing, and physics. Explore solutions and see how NVIDIA GPUs create top-performing models. Also learn how NVIDIA RAPIDS is allowing more possibilities with GPUs. Kaggle is an online platform that challenges participants to build models from real-world data to solve real-world problems while competing for highest model accuracy. NVIDIA RAPIDS is an open-source library that allows data scientists to build entire pipelines on GPU. RAPIDS accelerates feature search and engineering, as well as model training, validation, and inference.
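As a flavor of what this looks like in practice, here is a minimal sketch of a GPU tabular workflow: a cuDF groupby feature joined back onto the frame, then GPU-accelerated XGBoost training. The synthetic data and column names are purely illustrative.

```python
import cudf
import numpy as np
import xgboost as xgb

# Synthetic tabular data held on the GPU (column names are illustrative)
n = 100_000
df = cudf.DataFrame({
    "user_id": np.random.randint(0, 1000, n),
    "price": np.random.rand(n).astype("float32"),
    "target": np.random.randint(0, 2, n),
})

# GPU feature engineering: per-user mean price, joined back onto the frame
user_mean = (
    df.groupby("user_id")["price"].mean()
      .reset_index().rename(columns={"price": "user_mean_price"})
)
df = df.merge(user_mean, on="user_id", how="left")

# GPU-accelerated gradient boosting; XGBoost accepts cuDF inputs directly
dtrain = xgb.DMatrix(df[["price", "user_mean_price"]], label=df["target"])
params = {"tree_method": "gpu_hist", "objective": "binary:logistic"}
booster = xgb.train(params, dtrain, num_boost_round=50)
```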
Learn about lossless compression algorithms for GPUs, and how to leverage them to accelerate GPU-GPU and GPU-CPU data transfers. We'll cover some new compression schemes, as well as existing methods tuned for better performance on the GPU. Prior knowledge of common compression methods is recommended, but not required.
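To make the tradeoff concrete, here is a hedged back-of-the-envelope model (not an API from the session) of when compressing before a transfer pays off: the compressed path wins when compress plus send plus decompress beats sending the raw bytes. All throughput and ratio numbers are illustrative placeholders.

```python
def transfer_times(size_bytes, link_gb_s, ratio, codec_gb_s):
    """Return (raw, compressed-path) transfer times in seconds."""
    raw = size_bytes / (link_gb_s * 1e9)
    send_compressed = (size_bytes / ratio) / (link_gb_s * 1e9)
    codec_overhead = 2 * size_bytes / (codec_gb_s * 1e9)  # compress + decompress passes
    return raw, send_compressed + codec_overhead

raw_s, comp_s = transfer_times(
    size_bytes=1 * 1024**3,  # 1 GiB payload
    link_gb_s=12,            # e.g. a PCIe-class GPU-CPU link
    ratio=3.0,               # achievable ratio depends heavily on the data
    codec_gb_s=50,           # per-pass codec throughput on the GPU
)
print(f"raw: {raw_s*1e3:.1f} ms  compressed path: {comp_s*1e3:.1f} ms")
```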
NVIDIA GPUs accelerate the most important applications in quantum chemistry (like Gaussian, VASP, Quantum ESPRESSO, GAMESS, NWChem, and CP2K) and molecular dynamics (like GROMACS, NAMD, LAMMPS, and Amber) that are also very popular in materials science, biophysics, drug discovery, and other domains. We'll answer your questions about how to get the best performance for your specific workload or figure out how you can benefit from accelerated computing.
Get your questions answered on how to build and deploy vision AI applications for traffic engineering, parking management, sports analytics, retail, or smart work spaces for occupancy analytics and more. We have optimized tools that can help you build real-time, seamless intelligent video analytics pipelines from edge to the cloud and reduce your design and development cycles significantly.
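One such tool is the DeepStream SDK. Below is a hedged sketch of a minimal DeepStream-style analytics pipeline launched from Python through GStreamer; it assumes the DeepStream GStreamer plugins are installed, and the file and config paths are placeholders that will vary with your installation.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# decode -> batch (nvstreammux) -> inference (nvinfer) -> on-screen display
pipeline = Gst.parse_launch(
    "filesrc location=/path/to/sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=/path/to/config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```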
Conversational AI is the application of machine learning to develop language-based apps that allow humans to interact naturally with machines. In the past few years, deep learning has improved the state of the art in conversational AI and offered superhuman accuracy on certain tasks. Deep learning has also reduced the need for deep knowledge of linguistics and rule-based techniques for building language services, which has led to widespread adoption across industries. Learn how to use NVIDIA conversational AI frameworks and toolkits, including Riva, TAO Toolkit, NeMo, and GPU-accelerated Kaldi, to train and deploy conversational AI applications.
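As a small illustration of the toolkit side, here is a hedged sketch of loading a pretrained NeMo speech-recognition model and transcribing a clip; the checkpoint name and audio path are examples, and available models depend on your NeMo version.

```python
import nemo.collections.asr as nemo_asr

# Load a pretrained CTC speech-recognition checkpoint (example name)
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="QuartzNet15x5Base-En"
)

# Transcribe an audio file (placeholder path)
print(asr_model.transcribe(["sample.wav"])[0])
```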
See a short presentation on NVIDIA's optimizations in the deep learning frameworks. We'll also talk about the optimized models available in the Deep Learning Examples repository. Then we'll cover scaling using NCCL, and finally we'll address debuggability and profiling using DLProf for TensorFlow and PyProf for PyTorch. After the presentation, we'll have an open forum and Q&A session with the experts.
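For the scaling portion, a minimal data-parallel training sketch using PyTorch's NCCL backend looks roughly like the following (launched with torchrun, one process per GPU); the tiny model and random data are placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")        # NCCL performs the all-reduce
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(128, 10).cuda(local_rank)
ddp_model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

for _ in range(10):
    x = torch.randn(32, 128, device=local_rank)
    y = torch.randint(0, 10, (32,), device=local_rank)
    loss = torch.nn.functional.cross_entropy(ddp_model(x), y)
    optimizer.zero_grad()
    loss.backward()                            # gradients all-reduced via NCCL
    optimizer.step()

dist.destroy_process_group()
```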
Are you wondering how to easily access tensor cores through NVIDIA Math Libraries, such as sparse tensor cores introduced with the NVIDIA Ampere Architecture GPUs? Or have you already used our libraries and have questions or feedback? Meet the engineers who create tensor core accelerated libraries (cuBLAS, cuSPARSE, cuSOLVER, cuTENSOR, and CUTLASS) to get answers to your questions, give your feedback, or to discuss new functionality that you think should be added.
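As a Python-level illustration, a GEMM in CuPy dispatches to cuBLAS under the hood, and FP16 operands make it eligible for Tensor Core kernels on supported GPUs; exact kernel selection depends on your GPU and library versions.

```python
import cupy as cp

# FP16 operands make this GEMM eligible for Tensor Core kernels in cuBLAS
a = cp.random.rand(4096, 4096, dtype=cp.float32).astype(cp.float16)
b = cp.random.rand(4096, 4096, dtype=cp.float32).astype(cp.float16)

c = a @ b                          # matmul is dispatched to cuBLAS
cp.cuda.Device().synchronize()
print(c.dtype, c.shape)
```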
We'll deep dive into how to optimally prepare, train, and deploy recommender systems on the GPU. Experts will be available to speak with you one-on-one in breakout rooms about your specific questions and problems related to recommender system ETL, training, and inference. We'll go over some of the tools and technologies that NVIDIA has been building in the recommender system space and will also touch on techniques for optimizing commonly used frameworks for recommender workflows. If you're interested in speeding up your recommender system, gaining a feature engineering advantage like the one we used to win the 2020 RecSys Challenge, or overcoming the challenges of taking trained models and deploying them into production, this session is for you.
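One concrete example of that tooling is NVTabular, part of the NVIDIA Merlin recommender stack. The sketch below assumes NVTabular's column-group API, with placeholder column names and file paths.

```python
import nvtabular as nvt

# Declare GPU-accelerated preprocessing as column groups (placeholder names)
cat_features = ["user_id", "item_id"] >> nvt.ops.Categorify()
cont_features = ["price"] >> nvt.ops.FillMissing() >> nvt.ops.Normalize()

workflow = nvt.Workflow(cat_features + cont_features + ["clicked"])

train = nvt.Dataset("train.parquet")     # lazily-read, GPU-backed dataset
workflow.fit(train)                      # compute categories, means, etc.
workflow.transform(train).to_parquet("processed_train/")
```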
We'll answer questions on the NVIDIA Maxine SDK for video conferencing services. Applications based on Maxine can reduce video bandwidth usage down to one-tenth of H.264 using AI video compression. Maxine includes APIs for the latest innovations from NVIDIA research, such as face alignment, gaze correction, face re-lighting, and real-time translation, in addition to capabilities such as super-resolution, noise removal, closed captioning, and virtual assistants. These capabilities are fully accelerated on NVIDIA GPUs to run in real-time video streaming applications in the cloud. Applications built with Maxine can easily be deployed as microservices that scale to hundreds of thousands of streams in a Kubernetes environment.
Learn more about NVIDIA's CloudXR platform. Pro Virtualization experts will take you on a deep dive of NVIDIA streaming XR technology. We'll cover the fundamentals of streaming, wireless 5G networks, and extended reality. With recent developments in the cloud service provider space, the CloudXR ecosystem is expanding to be a part of many platforms, including Amazon Web Services. This Connect with Experts session will walk attendees through the process of using the CloudXR SDK, as well as running CloudXR on Amazon Web Services.
Developing solutions for vision, autonomous machines, or robotics? Share your challenges with our experts in embedded edge AI development, the Jetson platform, and SDKs including ISAAC, DeepStream, and TLT.
Perception development remains a challenging topic for autonomous vehicles. Fidelity in simulation is increasing rapidly and pushing the boundaries of perception development.
Deep learning, machine learning, and data science are evolving at an unprecedented rate. Almost every day, new tools and algorithms emerge that make the impossible possible while adding layers of complexity to an already challenging field.
To support you, we’re hosting an interactive session with NVIDIA experts so that you can get your toughest questions answered.
Join us to attend 1:1 chats or group sessions to discuss your projects and challenges with our experts. Example topics include:
- State-of-the-art algorithms and tools
- Choosing and optimizing models for production with tools like TensorRT or Triton Inference Server.
- Profiling training and inference bottlenecks in model implementation.
- GPU acceleration of traditional data science and machine learning workloads with RAPIDS: cuDF (a GPU-accelerated equivalent of pandas), cuML (scikit-learn and XGBoost), cuGraph (NetworkX), and their multi-GPU/multi-node implementations with Dask, as sketched below.
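As a minimal sketch of that last point, swapping cuDF in where a pandas workflow is the bottleneck can look like this; the CSV path and column names are placeholders.

```python
import cudf

# Loads straight into GPU memory (placeholder path)
df = cudf.read_csv("transactions.csv")

# A pandas-style groupby/aggregation, executed on the GPU
summary = df.groupby("customer_id").agg(
    {"amount": ["sum", "mean"], "order_id": "count"}
)
print(summary.head())
```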
Key Takeaways:
- Explore Omniverse, the powerful new collaboration platform for 3D production pipelines
- Learn how to overcome the biggest challenges of virtualization
- Learn how NVIDIA RTX Server™ powers mixed workloads for the most intense graphics and compute workflows for virtualization, rendering, data science, simulation, scientific visualization, and augmented/virtual reality
- Learn how CloudXR helps enterprises integrate AR and VR into their workflows and how it operates across 5G and WiFi networks
GPUs have the ability to transform healthcare and life sciences through artificial intelligence that assists physicians, as well as by accelerating existing workloads for digital signal processing and image reconstruction.
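As one hedged illustration of the image-reconstruction side, many imaging pipelines (MRI, for example) recover an image from frequency-domain samples with an inverse FFT, which maps directly onto the GPU with CuPy; the k-space data here is synthetic.

```python
import cupy as cp

# Synthetic image standing in for a ground-truth acquisition
image = cp.random.rand(512, 512).astype(cp.complex64)
kspace = cp.fft.fft2(image)            # simulated forward measurement ("k-space")
reconstructed = cp.fft.ifft2(kspace)   # GPU inverse FFT reconstruction

print(float(cp.abs(reconstructed - image).max()))   # reconstruction error
```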
Come talk to experts in GPU programming and code optimization, share your experience with them, and get guidance on how to achieve maximum performance on NVIDIA's platform.
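If you're just getting started, a common first step before profiling and tuning is a custom kernel written in Python with Numba; here is a minimal, hedged sketch of a SAXPY kernel with placeholder data.

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # One thread per element
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # host arrays copied automatically
print(out[:4])
```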