Rapid Application Development Using Large Language Models (RADLLM)

 

Course Summary

Recent advancements in both the techniques and accessibility of large language models (LLMs) have opened up unprecedented opportunities for businesses to streamline their operations, decrease expenses, and increase productivity at scale. Enterprises can also use LLM-powered apps to provide innovative and improved services to clients or strengthen customer relationships. For example, enterprises could provide customer support via AI virtual assistants or use sentiment analysis apps to extract valuable customer insights.

In this course, you’ll gain a strong understanding and practical knowledge of LLM application development by exploring the open-source ecosystem, including pretrained LLMs, that can help you quickly get started developing LLM-based applications.

Please note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.

Teaching Methods:
  • Pre-course knowledge-check quiz (if applicable)
  • Training delivered by an instructor certified by the vendor
  • Training available in person or online
  • Remote labs/lab platform provided for each participant (if applicable to the course)
  • Official course materials in English distributed to each participant
    • A working knowledge of written technical English is required to understand the course materials
Assessment Methods:
  • Pre-course knowledge-check quiz (if applicable)
  • Formative assessments throughout the course, through hands-on lab work at the end of each module, multiple-choice quizzes, scenario exercises, and more
  • Completion by each participant of a questionnaire and/or positioning questionnaire before and after the course to validate skills acquisition

Prerequisites

  • Introductory deep learning experience; comfort with PyTorch and transfer learning is preferred. Content covered by DLI’s Getting Started with Deep Learning or Fundamentals of Deep Learning courses, or similar experience, is sufficient.
  • Intermediate Python experience, including object-oriented programming and working with libraries. Content covered by the Python Tutorial (w3schools.com) or similar experience is sufficient.

Objectives

By participating in this workshop, you’ll learn how to:

  • Find, pull in, and experiment with the HuggingFace model repository and the associated transformers API (a minimal sketch follows this list)
  • Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification
  • Use decoder models to generate sequences like code, unbounded answers, and conversations
  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation
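
As a taste of that workflow, the sketch below pulls a pretrained model from the HuggingFace Hub through the transformers pipeline API. The checkpoint and example input are illustrative choices, not prescribed course material.

    from transformers import pipeline

    # Download a pretrained sentiment model from the HuggingFace Hub;
    # any compatible text-classification checkpoint would work here.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    print(classifier("The virtual assistant resolved my issue in minutes."))
    # e.g. [{'label': 'POSITIVE', 'score': ...}]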

Follow-on Courses

Course Content

Introduction

  • Meet the instructor.
  • Create an account at courses.nvidia.com/join.

From Deep Learning to Large Language Models

  • Learn how large language models are structured and how to use them:
    • Review deep learning and class-based reasoning, and see how language modeling emerges from it.
    • Discuss transformer architectures, interfaces, and intuitions, and how they are scaled up and modified to produce state-of-the-art LLM solutions (a minimal sketch follows this list).
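
As a minimal sketch of these building blocks, the snippet below loads a pretrained transformer encoder and inspects its contextual embeddings. The bert-base-uncased checkpoint is an illustrative choice.

    import torch
    from transformers import AutoTokenizer, AutoModel

    # Load a pretrained encoder and its matching tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("Language models predict tokens from context.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One contextual embedding per input token: (batch, sequence_length, hidden_size).
    print(outputs.last_hidden_state.shape)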

Specialized Encoder Models

  • Learn how to look at the different task specifications:
    • Explore cutting-edge HuggingFace encoder models.
    • Use already-tuned models for tasks such as token classification, sequence classification, range prediction, and zero-shot classification (a sketch follows this list).
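
As a sketch of an already-tuned encoder at work, the snippet below runs token classification (named-entity recognition) with the transformers pipeline API; the dslim/bert-base-NER checkpoint is an illustrative choice.

    from transformers import pipeline

    # Token classification with a fine-tuned encoder checkpoint;
    # aggregation_strategy="simple" merges word-piece tokens into whole entities.
    ner = pipeline(
        "token-classification",
        model="dslim/bert-base-NER",
        aggregation_strategy="simple",
    )

    for entity in ner("NVIDIA is headquartered in Santa Clara, California."):
        print(entity["entity_group"], entity["word"])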

Encoder-Decoder Models for Seq2Seq

  • Learn about sequence-to-sequence LLMs for predicting unbounded sequences (a sketch follows this list):
    • Introduce a decoder component for autoregressive text generation.
    • Discuss cross-attention for sequence-as-context formulations.
    • Discuss general approaches for multi-task, zero-shot reasoning.
    • Introduce multimodal formulation for sequences, and explore some examples.
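
As a sketch of the encoder-decoder formulation, the snippet below uses an instruction-tuned T5 model: the encoder consumes the whole input, and the decoder generates the output autoregressively while attending to it through cross-attention. The google/flan-t5-small checkpoint is an illustrative choice.

    from transformers import pipeline

    # Sequence-to-sequence generation with an encoder-decoder model.
    seq2seq = pipeline("text2text-generation", model="google/flan-t5-small")

    # The same model handles multiple tasks, steered by the instruction in the input.
    print(seq2seq("Translate English to German: How old are you?"))
    print(seq2seq("Summarize: The course covers encoder, decoder, and seq2seq models."))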

Decoder Models for Text Generation

  • Learn about decoder-only GPT-style models and how they can be specified and used:
    • Explore when decoder-only models work well, and discuss issues with the formulation.
    • Discuss model size, special deployment techniques, and considerations.
    • Pull in some large text-generation models, and see how they work (a sketch follows this list).
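
As a sketch of decoder-only generation, the snippet below autoregressively continues a prompt with a small GPT-style model; gpt2 is an illustrative checkpoint, far smaller than the production-scale models the module discusses.

    from transformers import pipeline

    # Decoder-only (GPT-style) text generation.
    generator = pipeline("text-generation", model="gpt2")

    out = generator(
        "def fibonacci(n):",
        max_new_tokens=40,
        do_sample=False,  # greedy decoding, for reproducible output
    )
    print(out[0]["generated_text"])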

Stateful LLMs

  • Learn how to elevate language models above stochastic parrots via context injection (a minimal sketch follows this list):
    • Show off modern LLM composition techniques for history and state management.
    • Discuss retrieval-augmented generation (RAG) for external environment access.
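
As a minimal sketch of context injection, the function below composes retrieved passages and conversation history into the prompt before each generation. The retrieve and llm_generate callables are hypothetical placeholders, not part of any specific library.

    # RAG-style context injection: state lives outside the model and is
    # re-inserted into the prompt on every turn.
    def answer(question, history, retrieve, llm_generate):
        context = "\n".join(retrieve(question))   # external knowledge (hypothetical retriever)
        dialogue = "\n".join(history)             # running conversation state
        prompt = (
            f"Context:\n{context}\n\n"
            f"Conversation so far:\n{dialogue}\n\n"
            f"User: {question}\nAssistant:"
        )
        reply = llm_generate(prompt)              # hypothetical LLM call
        history.append(f"User: {question}")
        history.append(f"Assistant: {reply}")
        return reply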

Assessment and Q&A

  • Review key learnings.
  • Take a code-based assessment to earn a certificate.

Pricing & Delivery Methods

Online Training

Duration
1 day

Price
  • US$ 500

Agenda

Access period: registration is possible up to the course date.
Instructor-Led Online Training
FLEX delivery: available both remotely and in person. All our FLEX courses are also ILO (Instructor-Led Online).

Language: English

Online sessions are offered in the following time zones:
  • Central European Summer Time (CEST) ±1 hour / Central European Time (CET)
  • Eastern Standard Time (EST) (6 hours' difference)
  • Pacific Daylight Time (PDT) (9 hours' difference)

Europe

Germany

Berlin
Frankfurt

If you cannot find a suitable date, feel free to check the schedule of all our international FLEX courses.